
Feature/connected pipeline#147

Open
omsherikar wants to merge 16 commits into main from feature/connected-pipeline

Conversation

@omsherikar
Contributor

@omsherikar omsherikar commented Apr 3, 2026

Summary by CodeRabbit

  • New Features

    • Session-based workflow: analyses create persistent sessions with per-severity counts; sessions can be listed, inspected, and referenced by ID.
    • New CLI commands/options: one-shot run (analyze+fix), status (inspect sessions), session-aware autofix/rollback, and a --fix-on option to control which fixes are queued.
    • Analyze now reports session ID and queued fix counts.
  • Tests

    • Added comprehensive tests for pipeline, sessions, queueing, apply/dry-run, status, run, autofix, and rollback.

omsherikar and others added 9 commits April 3, 2026 20:42
Introduces PipelineSession, FixQueueItem, SessionState, FixStatus
dataclasses and a SessionStore that persists/loads session JSON under
<root>/.refactron/sessions/. 8 TDD tests all green, flake8 clean.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
… ID mismatch warning

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Implements RefactronPipeline in refactron/core/pipeline.py connecting
the analyze → fix queue → verify → write pipeline with session persistence.
Adds 6 passing tests in tests/test_pipeline_orchestrator.py.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Wire refactron analyze to create and persist a PipelineSession from the
existing AnalysisResult (no re-analysis), print the session ID at the
bottom of output, and optionally queue matching issues via --fix-on.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Wire the autofix command to the RefactronPipeline: supports --session
<id> to resume a queued session, or falls back to fresh analysis when
only a target path is given. Updates autofix tests to mock the pipeline.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Implements Task 5 of the connected pipeline: a `refactron status` CLI
command that reads PipelineSession state from SessionStore and displays
analysis summary, fix queue progress, and next-step hints.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds --pipeline-session option to refactron rollback command, enabling
rollback of a specific pipeline session tracked by PipelineSession/SessionStore.
Updates session state to ROLLED_BACK after successful restore.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Implements the `refactron run` CLI command that chains analyze → queue →
apply in a single invocation, with --dry-run, --fix-on, --no-verify, and
--fail-on gates. Registers the command in main.py and adds 3 passing tests.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds tests/test_pipeline_integration.py with 4 end-to-end tests covering
analyze → queue → apply roundtrip, session persistence, blocked-fix
recording, and min_level filtering. Applies black/isort formatting to all
connected-pipeline source and test files to satisfy the linting gate.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings April 3, 2026 19:33
@coderabbitai
Contributor

coderabbitai bot commented Apr 3, 2026

Warning

Rate limit exceeded

@omsherikar has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 9 minutes and 22 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 9 minutes and 22 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: aaa5e165-e486-42e9-a2fd-69745689866c

📥 Commits

Reviewing files that changed from the base of the PR and between 8700632 and 4dda068.

📒 Files selected for processing (2)
  • refactron/cli/refactor.py
  • refactron/core/pipeline.py
📝 Walkthrough

Walkthrough

Adds a session-driven pipeline: new RefactronPipeline, PipelineSession and SessionStore persistence, session-aware queue/apply semantics, new CLI commands (run, status) and session integration in analyze/autofix/rollback, plus tests for orchestration and session persistence.

Changes

Cohort / File(s) Summary
CLI Analysis with Session Queuing
refactron/cli/analysis.py
analyze gained --fix-on, now forces non-incremental analysis unconditionally, builds/saves a PipelineSession from analysis totals, enqueues all issues into the session and prints session id and queued counts (note: fix_on mapping not wired into queueing here).
CLI Command Registration
refactron/cli/main.py
Conditionally register status and run subcommands via guarded imports.
CLI Autofix & Rollback Sessions
refactron/cli/refactor.py
Made autofix target optional, added --session and --fix-on; new pipeline-first flow (load or create session, queue items, apply with pipeline.apply). rollback got --pipeline-session and an early-exit backup-based rollback path.
New One‑Shot Pipeline Command
refactron/cli/run.py
Added run command that builds a pipeline, runs analysis, queues issues by --fix-on, previews/applies fixes (--dry-run/--no-verify), persists session, prints session id and undo hint, and supports --fail-on exit gating.
New Session Status Inspection Command
refactron/cli/status.py
Added status command to list sessions and show a session’s metadata, per-level issue counts and fix counts, blocked-fix details, and next-step hints.
Pipeline Orchestration & Session Model
refactron/core/pipeline.py, refactron/core/pipeline_session.py
New RefactronPipeline with analyze, queue_issues, apply; new session model: PipelineSession, FixQueueItem, SessionState, FixStatus, and SessionStore reading/writing JSON under .refactron/ with current/latest helpers and session id generation.
Tests — CLI & Pipeline
tests/...
tests/test_cli.py, tests/test_cli_run.py, tests/test_cli_status.py, tests/test_pipeline_orchestrator.py, tests/test_pipeline_integration.py, tests/test_pipeline_session.py
Added and updated tests to cover run/status, session-aware autofix/rollback, SessionStore serialization/listing, queueing/filtering, apply/dry-run/blocked behavior, and end-to-end pipeline flows.

Sequence Diagram

sequenceDiagram
    participant User
    participant CLI
    participant Pipeline
    participant SessionStore
    participant FixEngine
    participant FileSystem

    User->>CLI: refactron analyze --fix-on ERROR
    CLI->>Pipeline: analyze(target)
    Pipeline->>FileSystem: scan files / detect issues
    Pipeline->>SessionStore: save(session {ANALYZED})
    SessionStore-->>Pipeline: session_id
    Pipeline-->>CLI: session_id
    CLI->>Pipeline: queue_issues(session, issues[, min_level])
    Pipeline->>SessionStore: update(session.fix_queue)
    CLI->>User: print session_id and queued count

    User->>CLI: refactron run/autofix --session <id> --dry-run?
    CLI->>SessionStore: load(session_id) / load_current()
    SessionStore-->>CLI: session (with fix_queue)
    CLI->>Pipeline: apply(session, dry_run, verify)
    Pipeline->>FixEngine: fix_file(file, issues_for_file)
    FixEngine->>FileSystem: read/write/backup files
    Pipeline->>SessionStore: save(updated session {FIXED/APPLIED/BLOCKED})
    CLI->>User: print applied/blocked/skipped counts and session id

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes


Poem

🐰
I hopped through files at break of day,
I queued the issues, then leapt away.
Preview, apply, or roll back fast—
A session saved to make fixes last.
Hop on, run, and watch the patches play!

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 21.31%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.
  • Title Check — ❓ Inconclusive: the title 'Feature/connected pipeline' is vague, echoing the branch name rather than describing the changes in the changeset. Resolution: use a descriptive title such as 'Add pipeline session workflow with analyze, queue, and apply stages' or 'Implement end-to-end pipeline architecture with session management'.

✅ Passed checks (1 passed)
  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.


Contributor

Copilot AI left a comment


Pull request overview

Introduces a “connected pipeline” concept by adding a persisted PipelineSession model and a new RefactronPipeline orchestrator, then wiring CLI commands to create, inspect, and run sessions end-to-end.

Changes:

  • Added PipelineSession/SessionStore JSON persistence under <project_root>/.refactron/sessions/.
  • Implemented RefactronPipeline (analyze → queue → apply) and added new CLI commands status and run.
  • Updated analyze and autofix CLIs plus added tests covering session persistence and new commands.

Reviewed changes

Copilot reviewed 13 out of 13 changed files in this pull request and generated 10 comments.

Show a summary per file
File Description
tests/test_pipeline_session.py New unit tests for session model serialization and SessionStore.
tests/test_pipeline_orchestrator.py New tests for RefactronPipeline analyze/queue/apply orchestration behavior.
tests/test_pipeline_integration.py New end-to-end style tests for analyze→queue→apply flows and edge cases.
tests/test_cli.py Updates autofix CLI tests to mock the new pipeline usage.
tests/test_cli_status.py New tests for refactron status command output and options.
tests/test_cli_run.py New tests for refactron run command behavior and exit codes.
refactron/core/pipeline.py New pipeline orchestrator for session-aware analysis and autofix application.
refactron/core/pipeline_session.py New persisted session state model + JSON store utilities.
refactron/cli/status.py New CLI command to display latest/specific session and fix counts.
refactron/cli/run.py New one-shot CLI command to analyze, queue, apply, and optionally fail CI.
refactron/cli/refactor.py Adds session-aware autofix behavior and pipeline-session rollback support.
refactron/cli/main.py Registers the new status and run commands.
refactron/cli/analysis.py Emits a persisted pipeline session after analysis (optionally auto-queueing fixes).
Comments suppressed due to low confidence (1)

refactron/cli/refactor.py:360

  • The new session-aware branch returns at line 355, leaving the entire original autofix implementation (setup/config/safety mapping, etc.) unreachable dead code. This is error-prone (future edits may land in the dead branch) and also makes options like --safety-level appear implemented but effectively ignored. Please remove the unreachable block or refactor so there is a single coherent codepath.
    if dry_run:
        console.print("\n[bold]Dry-run complete[/bold]")
        for _item in _pipeline_session.fix_queue:
            if _item.diff:
                console.print(
                    f"\n  [cyan]{_item.file_path}:{_item.line_number}[/cyan] {_item.message}"
                )
                console.print(_item.diff)
    else:
        console.print(f"\n[bold green]Applied:[/bold green]  {_applied}")
        if _blocked:
            console.print(f"[bold red]Blocked:[/bold red]  {_blocked}")
        if _skipped:
            console.print(f"[dim]Skipped:  {_skipped}[/dim]")
        console.print(f"\n[dim]Session: {_pipeline_session.session_id}[/dim]")
        if _applied > 0:
            console.print(
                f"[dim]To undo: refactron rollback --session "
                f"{_pipeline_session.session_id}[/dim]"
            )
    return

    # Setup
    target_path = _validate_path(target)
    _load_config(config, profile, environment)
    _print_file_count(target_path)


Comment on lines 278 to +329
@@ -284,6 +294,66 @@ def autofix(
    _auth_banner("Auto-fix")
    console.print()

    # ── Session-aware pipeline ────────────────────────────────────────
    from refactron.core.pipeline import RefactronPipeline

    _target_path = Path(target) if target else Path.cwd()
    _project_root = _target_path if _target_path.is_dir() else _target_path.parent
    _pipeline = RefactronPipeline(project_root=_project_root)

    if session_id:
        _pipeline_session = _pipeline.store.load(session_id)
        if _pipeline_session is None:
            console.print(f"[red]Session not found: {session_id}[/red]")
            raise SystemExit(1)
        _queued = len([i for i in _pipeline_session.fix_queue if i.status.value == "pending"])
        console.print(
            f"[dim]Loaded session {session_id} · {_queued} items queued[/dim]"
        )
    else:
        if not target:
            console.print("[red]Provide --session <id> or a target path.[/red]")
            raise SystemExit(1)
        console.print("[dim]No session — running fresh analysis...[/dim]")
        _pipeline_session = _pipeline.analyze(_target_path)
        if _pipeline._last_result:
            _all_issues = [
                i for fm in _pipeline._last_result.file_metrics for i in fm.issues
            ]
            _pipeline.queue_issues(_pipeline_session, _all_issues)

    _pipeline.apply(
        _pipeline_session,
        dry_run=dry_run,
        verify=verify,
    )

Copilot AI Apr 3, 2026


autofix currently ignores the --preview/--apply flag: it always calls _pipeline.apply(..., dry_run=dry_run) regardless of preview. Since preview defaults to True and dry_run defaults to False, running refactron autofix <target> would write changes by default. Please gate writes on preview (e.g., treat preview as dry-run) so the default behavior remains non-destructive unless --apply is explicitly provided.
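A minimal sketch of the gating this comment asks for, assuming Click-style boolean flags named `preview` (default True) and `dry_run` (default False) as described above; the helper name and option wiring are illustrative, not the project's actual API:

```python
def effective_dry_run(preview: bool, dry_run: bool) -> bool:
    # Non-destructive by default: writes happen only when the user has
    # explicitly turned preview off (e.g. via --apply) and did not pass
    # --dry-run. Flag names are taken from the review comment.
    return dry_run or preview
```

The pipeline call would then become something like `pipeline.apply(session, dry_run=effective_dry_run(preview, dry_run), verify=verify)`, keeping the default invocation read-only.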


    _target_path = Path(target) if target else Path.cwd()
    _project_root = _target_path if _target_path.is_dir() else _target_path.parent
    _pipeline = RefactronPipeline(project_root=_project_root)

Copilot AI Apr 3, 2026


autofix accepts --safety-level, but the session-aware implementation constructs RefactronPipeline(project_root=_project_root) without passing the chosen risk level to the underlying AutoFixEngine. This makes --safety-level a no-op for the new flow. Please map the CLI string to FixRiskLevel and pass it into RefactronPipeline(..., safety_level=...) (or otherwise enforce the safety level).

Suggested change
    _pipeline = RefactronPipeline(project_root=_project_root)
    _safety_level_map = {
        "conservative": FixRiskLevel.CONSERVATIVE,
        "moderate": FixRiskLevel.MODERATE,
        "aggressive": FixRiskLevel.AGGRESSIVE,
    }
    _selected_safety_level = _safety_level_map.get(
        str(safety_level).lower(), FixRiskLevel.MODERATE
    )
    _pipeline = RefactronPipeline(
        project_root=_project_root, safety_level=_selected_safety_level
    )

    console.print(f"\n[dim]Session: {_pipeline_session.session_id}[/dim]")
    if _applied > 0:
        console.print(
            f"[dim]To undo: refactron rollback --session "

Copilot AI Apr 3, 2026


In the non-dry-run summary, the rollback hint uses refactron rollback --session <id>, but the pipeline-session rollback option added below is --pipeline-session. As written, the printed command will not roll back the pipeline session. Please update the hint to match the actual option name for pipeline sessions.

Suggested change
f"[dim]To undo: refactron rollback --session "
f"[dim]To undo: refactron rollback --pipeline-session "

Comment on lines +461 to +478
    if pipeline_session_id:
        from refactron.core.backup import BackupManager
        from refactron.core.pipeline_session import SessionState, SessionStore

        _store = SessionStore(root_dir=Path.cwd())
        _pipeline_session = _store.load(pipeline_session_id)
        if _pipeline_session is None:
            console.print(f"[red]Session not found: {pipeline_session_id}[/red]")
            raise SystemExit(1)
        if not _pipeline_session.applied_fixes:
            console.print("[yellow]No applied fixes in this session to roll back.[/yellow]")
            return
        if not _pipeline_session.backup_session_id:
            console.print("[red]Session has no backup ID — cannot roll back.[/red]")
            raise SystemExit(1)

        _bm = BackupManager(root_dir=Path.cwd())
        _restored_count, _failed = _bm.rollback_session(_pipeline_session.backup_session_id)

Copilot AI Apr 3, 2026


Pipeline-session rollback loads sessions from SessionStore(root_dir=Path.cwd()), which will fail if the user runs the command from a different directory than the project root where the session was saved. Consider accepting/using a --project-root option (like refactron status does) or searching upward for .refactron/sessions similarly to how the backup rollback code searches for .refactron/backups.

Comment on lines +109 to +119
def _find_fixer_name(self, issue: CodeIssue) -> Optional[str]:
if not self.fix_engine.can_fix(issue):
return None
for name, fixer in self.fix_engine.fixers.items():
try:
result = fixer.preview(issue, "x = 1\n")
if result.success:
return name
except Exception:
continue
return None

Copilot AI Apr 3, 2026


_find_fixer_name calls self.fix_engine.can_fix(issue), but AutoFixEngine.can_fix only returns True when issue.rule_id matches a fixer name. Analyzer rule IDs (e.g., S004, S006, etc.) generally won’t match fixer names (e.g., extract_magic_numbers, remove_unused_imports), so this will cause nearly everything to be marked SKIPPED and nothing queued. Please introduce an explicit mapping from analyzer rule_id → fixer key (and/or change the selection logic) so queueing works with analyzer-produced issues.
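An explicit mapping of the kind this comment asks for might be sketched as follows. The rule IDs and fixer keys are the ones the comment cites, but the specific pairings below are invented for illustration and would need to match the analyzers' and AutoFixEngine's real registries:

```python
from typing import Optional

# Illustrative pairings only — verify against the actual rule and fixer
# registries before using.
RULE_TO_FIXER = {
    "S004": "remove_unused_imports",
    "S006": "extract_magic_numbers",
}

def find_fixer_name(rule_id: str) -> Optional[str]:
    # Direct lookup instead of probing every fixer with dummy source.
    return RULE_TO_FIXER.get(rule_id)
```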

file_path=file_path,
line_number=item.line_number,
column=0,
suggestion=None,

Copilot AI Apr 3, 2026


Even when a FixQueueItem has a fixer_name, _queue_items_to_issues reconstructs CodeIssue objects without setting rule_id (and also drops metadata). Since AutoFixEngine.fix_file only applies fixes when issue.rule_id matches a registered fixer, fix_file will skip all issues and diff will always be empty/None. Please propagate the selected fixer key into CodeIssue.rule_id (and preserve any needed metadata) when converting queue items back into issues.

Suggested change
suggestion=None,
suggestion=None,
rule_id=item.fixer_name,
metadata=getattr(item, "metadata", None),

Comment on lines +136 to +142
        backup_session_id: Optional[str] = None
        if not dry_run and files_to_fix:
            backup_session_id = self.backup_manager.create_backup_session(
                description=f"autofix session {session.session_id}"
            )
            session.backup_session_id = backup_session_id


Copilot AI Apr 3, 2026


apply() creates a backup session ID but never actually backs up any files before writing changes. BackupManager.create_backup_session() only creates the session directory/index entry; without calling backup_file(s) for the files being modified, rollback will have nothing to restore. Please back up all files in files_to_fix before invoking fix_engine.fix_file when dry_run is False.

Comment on lines +143 to +150
        for file_path_str, items in files_to_fix.items():
            file_path = Path(file_path_str)
            if not file_path.exists():
                for item in items:
                    item.status = FixStatus.BLOCKED
                    item.block_reason = "file not found"
                    session.blocked_fixes.append(item)
                continue

Copilot AI Apr 3, 2026


apply() derives file_path = Path(file_path_str) directly from the stored string. If sessions persist relative paths (common when target is passed as a relative path), resuming a session from a different working directory will break (files appear missing → fixes get BLOCKED). Consider storing paths relative to project_root and resolving against it at apply-time, or storing absolute resolved paths when queuing.
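The resolve-at-apply-time option from this comment can be sketched in a few lines; the helper name is hypothetical:

```python
from pathlib import Path

def resolve_queued_path(stored: str, project_root: Path) -> Path:
    # Persist queue paths relative to project_root, then resolve them
    # against it at apply time, so a session resumed from a different
    # working directory still finds its files. Already-absolute stored
    # paths pass through unchanged.
    p = Path(stored)
    return p if p.is_absolute() else project_root / p
```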

Comment on lines +145 to +163
            if not file_path.exists():
                for item in items:
                    item.status = FixStatus.BLOCKED
                    item.block_reason = "file not found"
                    session.blocked_fixes.append(item)
                continue

            issues_for_file = self._queue_items_to_issues(items, file_path)
            try:
                fixed_code, diff = self.fix_engine.fix_file(
                    file_path, issues_for_file, dry_run=dry_run, verify=verify
                )
            except Exception as exc:
                logger.warning("fix_file failed for %s: %s", file_path, exc)
                for item in items:
                    item.status = FixStatus.BLOCKED
                    item.block_reason = str(exc)
                    session.blocked_fixes.append(item)
                continue

Copilot AI Apr 3, 2026


apply() appends the same FixQueueItem objects to session.applied_fixes / session.blocked_fixes but also leaves them in session.fix_queue. This duplicates entries across lists in the persisted session JSON and makes it harder to reason about counts/state. Consider either (a) keeping all items only in fix_queue and deriving views from status, or (b) moving items out of fix_queue when they become applied/blocked (so each item lives in exactly one list).
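Option (a) from this comment — keeping every item only in `fix_queue` and deriving the applied/blocked views from status — could be sketched like this, using simplified stand-ins for the PR's `FixQueueItem`/`FixStatus` types:

```python
from dataclasses import dataclass
from enum import Enum

class FixStatus(Enum):
    PENDING = "pending"
    APPLIED = "applied"
    BLOCKED = "blocked"

@dataclass
class FixQueueItem:
    message: str
    status: FixStatus = FixStatus.PENDING

def applied_fixes(queue: list) -> list:
    # Derived view: no item is ever duplicated across persisted lists.
    return [i for i in queue if i.status is FixStatus.APPLIED]

def blocked_fixes(queue: list) -> list:
    return [i for i in queue if i.status is FixStatus.BLOCKED]
```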

Comment on lines +120 to +129
    # --fail-on gate
    if fail_on:
        threshold = _LEVEL_RANK[fail_on.upper()]
        should_fail = any(
            session.issues_by_level.get(lvl, 0) > 0
            for lvl, rank in _LEVEL_RANK.items()
            if rank >= threshold
        )
        if should_fail:
            raise SystemExit(1)

Copilot AI Apr 3, 2026


The --fail-on gate checks session.issues_by_level, but that data comes from the initial analysis and is never recomputed after fixes are applied. This means refactron run --fail-on ... can exit 1 even if fixes were applied successfully, or exit 0 even if critical issues remain. Please rerun analysis after apply (or otherwise update issues_by_level based on a post-fix analysis) before deciding whether to exit 1.
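Extracting the gate into a function that is fed *post-apply* counts makes the fix easy to express. The logic below mirrors the snippet quoted above; the `_LEVEL_RANK` values are assumptions based on that snippet:

```python
_LEVEL_RANK = {"INFO": 0, "WARNING": 1, "ERROR": 2, "CRITICAL": 3}

def should_fail(issues_by_level: dict, fail_on: str) -> bool:
    # Same gate as in the quoted code, but intended to receive counts
    # from an analysis re-run after pipeline.apply() completes.
    threshold = _LEVEL_RANK[fail_on.upper()]
    return any(
        issues_by_level.get(lvl, 0) > 0
        for lvl, rank in _LEVEL_RANK.items()
        if rank >= threshold
    )
```

The run command would then call `should_fail(recomputed_counts, fail_on)` rather than reading the stale pre-fix `session.issues_by_level`.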

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
refactron/cli/refactor.py (1)

241-262: ⚠️ Potential issue | 🔴 Critical

The new autofix path bypasses the command's safety contract.

Because this branch returns at Line 355, it never reaches the existing option handling below. Right now --preview does not prevent writes (pipeline.apply() only sees dry_run), so the default invocation can modify files, and --safety-level / --config / --profile / --environment are effectively no-ops on this path.

🛠️ Suggested fix
+    safety_map = {
+        "safe": FixRiskLevel.SAFE,
+        "low": FixRiskLevel.LOW,
+        "moderate": FixRiskLevel.MODERATE,
+        "high": FixRiskLevel.HIGH,
+    }
+    effective_dry_run = dry_run or preview
+
     _target_path = Path(target) if target else Path.cwd()
     _project_root = _target_path if _target_path.is_dir() else _target_path.parent
-    _pipeline = RefactronPipeline(project_root=_project_root)
+    _pipeline = RefactronPipeline(
+        project_root=_project_root,
+        safety_level=safety_map[safety_level.lower()],
+    )
@@
     _pipeline.apply(
         _pipeline_session,
-        dry_run=dry_run,
+        dry_run=effective_dry_run,
         verify=verify,
     )
Based on learnings, "All refactoring must go through safety-first pipeline: preview → backup → apply → optional rollback".

Also applies to: 297-355, 357-380

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/refactor.py` around lines 241 - 262, The early-return branch
bypasses the safety-first flow so options like --preview/--apply,
--safety-level, --config, --profile, --environment and --dry-run are ignored and
pipeline.apply() can write files; change the branch that currently returns to
instead funnel through the existing safety pipeline (preview → backup →
pipeline.apply() → optional rollback) so that --preview causes only a preview
(no writes) and pipeline.apply() respects dry_run and
safety-level/config/profile/environment and that verify is run prior to apply;
locate and update the code around the branch that returns (the block referencing
pipeline.apply(), --preview/--apply, --dry-run, --safety-level, and --verify) to
remove the premature return and invoke the common safety handlers (preview
handling, backup creation, apply call, and rollback path) so all refactors use
the same safety-first sequence.
🧹 Nitpick comments (1)
tests/test_cli_run.py (1)

31-38: These mocks never exercise the new pipeline path.

All three tests set mock_pipeline._last_result = None, which makes run() skip queue_issues() and never call pipeline.apply(). Populate file_metrics and assert the queue/apply interactions so regressions in the actual one-shot flow are caught.

Also applies to: 47-54, 62-68

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_cli_run.py` around lines 31 - 38, The tests currently set
mock_pipeline._last_result = None so run() always skips queue_issues() and
pipeline.apply(); change the mocks for RefactronPipeline in
tests/test_cli_run.py to populate mock_pipeline._last_result with a dict
containing file_metrics (e.g. {'file_metrics': [...]}), call runner.invoke(run,
...) as before, and add assertions that mock_pipeline.queue_issues was called
and mock_pipeline.apply was invoked (or not depending on flags) to ensure the
one-shot path is exercised; update the other two test blocks (lines
corresponding to the other patches) similarly so all three cases assert
queue_issues()/apply() interactions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@refactron/cli/refactor.py`:
- Around line 350-353: The rollback hint prints the wrong flag; update the
string passed to console.print in the blocks that check _applied > 0 (and the
similar block around lines 432-437) to use the new flag name --pipeline-session
instead of --session so the hint points users to the pipeline rollback path;
locate the console.print calls that reference _pipeline_session.session_id and
replace the embedded "--session" token with "--pipeline-session" so the printed
undo command matches the implemented CLI option.
- Around line 300-307: The session/backup lookup is using the caller's CWD
instead of the pipeline's project root; update the code that constructs
SessionStore and BackupManager and the session-loading logic to derive
project_root the same way RefactronPipeline does (use the
_target_path/_project_root logic used when instantiating RefactronPipeline or
use PipelineSession.target when resuming), so SessionStore.load(session_id) and
BackupManager operate against the pipeline's project_root rather than
Path.cwd(); apply the same change to the other affected block referenced (around
the code handling rollback / lines 461-478) so all session and backup resolution
consistently uses RefactronPipeline.project_root or PipelineSession.target.

In `@refactron/cli/run.py`:
- Around line 35-40: The CLI currently performs destructive changes by default;
change the click option for "--dry-run" to default=True (so preview is the
default) and remove the unconditional call to pipeline.apply(... dry_run=False)
in the run command; instead call a non-destructive pipeline.preview(...) after
analysis, and only invoke pipeline.backup(...) then pipeline.apply(...) when an
explicit confirmation/flag (e.g. an "--apply" or interactive yes prompt) is
provided; ensure the flow follows preview → backup → apply → optional rollback
and reference the existing pipeline.apply and pipeline.preview calls in run.py
so the code only writes files when the explicit apply path is taken.
- Around line 120-129: The fail-on check uses stale counts in
session.issues_by_level from the pre-apply analysis; after calling
pipeline.apply() you must recompute severity totals before evaluating --fail-on.
Update the code so that once pipeline.apply() completes you invoke the session
analysis routine that refreshes issue counts (e.g. call the method that re-runs
analysis or recomputes totals on session such that session.issues_by_level is
updated), then compute threshold via _LEVEL_RANK[fail_on.upper()] and evaluate
should_fail against the refreshed session.issues_by_level and raise
SystemExit(1) if appropriate.

In `@refactron/cli/status.py`:
- Around line 27-30: Wrap calls to SessionStore.list_sessions() and
SessionStore.load() in try/except at the CLI boundary (in the status command) so
any IO/Unicode/parse errors from unreadable or non-UTF8 files under
.refactron/sessions/ are caught; catch broad exceptions like
OSError/UnicodeDecodeError/ValueError (or Exception as a last resort), and print
a concise, friendly error message via the CLI logger/console instead of letting
the traceback propagate, then return a non-zero exit code. Locate the calls to
SessionStore.list_sessions() and SessionStore.load() in the status command and
add the try/except blocks around them, ensuring the error message references the
sessions directory/context so users know which file caused the problem.

In `@refactron/core/pipeline_session.py`:
- Around line 149-152: The save() implementation writes session JSON in-place
which can produce a truncated/corrupt newest file; change save(session:
PipelineSession) to write to a temporary file in the same directory (e.g.,
_session_path(session.session_id).with_suffix(".tmp")), flush and fsync the temp
file, then atomically rename/move it to the final path; reference
session.to_dict(), sessions_dir and _session_path. Update load_latest() to not
assume the single lexicographically newest file is valid: list files in
sessions_dir sorted descending, attempt to open and json.loads each in order,
returning the first successfully parsed PipelineSession (skip malformed files
and continue), and only return None if none parse successfully.
- Around line 146-147: Treat the session_id as untrusted data: validate it in
_session_path and any callers (reject if it contains path separators '/' or '\\'
or the segment '..', or use a strict allowlist/sanitization), and when loading a
session (the load() code that reads the JSON) verify the parsed object's
session_id exactly equals the requested session_id and raise an error if not;
ensure all places that build paths from session_id use _session_path so the
validation is centralized and the code fails closed on mismatches or invalid
ids.
- Around line 142-145: The constructor SessionStore.__init__ is missing a return
type annotation which violates disallow_untyped_defs; update the method
signature in class SessionStore (def __init__(self, root_dir: Optional[Path] =
None)) to include an explicit -> None return annotation, and ensure Optional and
Path are imported/available in that module so the type hint resolves.

In `@refactron/core/pipeline.py`:
- Around line 136-174: Before applying fixes when not dry_run, create the backup
session (backup_manager.create_backup_session) and then back up each target file
into that session prior to calling fix_engine.fix_file; after backing files,
call BackupManager.validate_backup_integrity(session.backup_session_id) and if
it returns any corrupt_paths mark all related FixItems as BLOCKED (set
block_reason) and append them to session.blocked_fixes, then skip applying fixes
for those files. Ensure when setting item.backup_path you point to
backup_manager.backup_dir / session.backup_session_id / file_path.name and only
proceed to call fix_engine.fix_file(file_path, ...) if the backup validation
succeeded.
- Around line 96-118: queue_issues() currently probes fixers using a synthetic
snippet and stores minimal info in FixQueueItem, and apply() rebuilds a generic
CodeIssue causing loss of original identity; change FixQueueItem to persist the
original issue payload (rule_id, category, metadata, full message, etc.) and
have _find_fixer_name call each fixer's preview against the real file contents
(load the file at issue.file_path) instead of "x = 1\n"; ensure apply()
reconstructs the CodeIssue from the stored original payload (not as
IssueCategory.COMPLEXITY) before calling AutoFixEngine.fix_file(), so
AutoFixEngine.fix_file() receives the original issue with correct
rule_id/metadata and previews are based on actual file text.

---

Outside diff comments:
In `@refactron/cli/refactor.py`:
- Around line 241-262: The early-return branch bypasses the safety-first flow so
options like --preview/--apply, --safety-level, --config, --profile,
--environment and --dry-run are ignored and pipeline.apply() can write files;
change the branch that currently returns to instead funnel through the existing
safety pipeline (preview → backup → pipeline.apply() → optional rollback) so
that --preview causes only a preview (no writes) and pipeline.apply() respects
dry_run and safety-level/config/profile/environment and that verify is run prior
to apply; locate and update the code around the branch that returns (the block
referencing pipeline.apply(), --preview/--apply, --dry-run, --safety-level, and
--verify) to remove the premature return and invoke the common safety handlers
(preview handling, backup creation, apply call, and rollback path) so all
refactors use the same safety-first sequence.

---

Nitpick comments:
In `@tests/test_cli_run.py`:
- Around line 31-38: The tests currently set mock_pipeline._last_result = None
so run() always skips queue_issues() and pipeline.apply(); change the mocks for
RefactronPipeline in tests/test_cli_run.py to populate
mock_pipeline._last_result with a dict containing file_metrics (e.g.
{'file_metrics': [...]}), call runner.invoke(run, ...) as before, and add
assertions that mock_pipeline.queue_issues was called and mock_pipeline.apply
was invoked (or not depending on flags) to ensure the one-shot path is
exercised; update the other two test blocks (lines corresponding to the other
patches) similarly so all three cases assert queue_issues()/apply()
interactions.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 27088f85-ef65-4eb6-bb18-9e7a80229609

📥 Commits

Reviewing files that changed from the base of the PR and between 049fb7b and d75b3ad.

📒 Files selected for processing (13)
  • refactron/cli/analysis.py
  • refactron/cli/main.py
  • refactron/cli/refactor.py
  • refactron/cli/run.py
  • refactron/cli/status.py
  • refactron/core/pipeline.py
  • refactron/core/pipeline_session.py
  • tests/test_cli.py
  • tests/test_cli_run.py
  • tests/test_cli_status.py
  • tests/test_pipeline_integration.py
  • tests/test_pipeline_orchestrator.py
  • tests/test_pipeline_session.py

Comment on lines +300 to +307
    _target_path = Path(target) if target else Path.cwd()
    _project_root = _target_path if _target_path.is_dir() else _target_path.parent
    _pipeline = RefactronPipeline(project_root=_project_root)

    if session_id:
        _pipeline_session = _pipeline.store.load(session_id)
        if _pipeline_session is None:
            console.print(f"[red]Session not found: {session_id}[/red]")

⚠️ Potential issue | 🟠 Major

Resolve sessions/backups from the pipeline root, not the caller's location.

RefactronPipeline persists sessions under its project_root, but autofix --session derives that root from the current target/cwd and rollback --pipeline-session hard-codes Path.cwd() for both SessionStore and BackupManager. Resuming or rolling back the same session from a subdirectory or another shell location will miss the saved session/backups or target the wrong project. Reuse the same root-discovery logic in both flows, or derive it from PipelineSession.target.

Also applies to: 461-478

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/refactor.py` around lines 300 - 307, The session/backup lookup
is using the caller's CWD instead of the pipeline's project root; update the
code that constructs SessionStore and BackupManager and the session-loading
logic to derive project_root the same way RefactronPipeline does (use the
_target_path/_project_root logic used when instantiating RefactronPipeline or
use PipelineSession.target when resuming), so SessionStore.load(session_id) and
BackupManager operate against the pipeline's project_root rather than
Path.cwd(); apply the same change to the other affected block referenced (around
the code handling rollback / lines 461-478) so all session and backup resolution
consistently uses RefactronPipeline.project_root or PipelineSession.target.

Comment on lines +35 to +40
@click.option(
    "--dry-run",
    is_flag=True,
    default=False,
    help="Show what would be fixed without writing any files",
)

⚠️ Potential issue | 🟠 Major

Don't make run write by default.

The non-dry path calls pipeline.apply(... dry_run=False) immediately after analysis, so this command skips the mandatory preview stage and becomes destructive by default. Please make preview the default outcome and require an explicit apply step or confirmation before any write.

Based on learnings: All refactoring must go through safety-first pipeline: preview → backup → apply → optional rollback.

Also applies to: 99-103

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/run.py` around lines 35 - 40, The CLI currently performs
destructive changes by default; change the click option for "--dry-run" to
default=True (so preview is the default) and remove the unconditional call to
pipeline.apply(... dry_run=False) in the run command; instead call a
non-destructive pipeline.preview(...) after analysis, and only invoke
pipeline.backup(...) then pipeline.apply(...) when an explicit confirmation/flag
(e.g. an "--apply" or interactive yes prompt) is provided; ensure the flow
follows preview → backup → apply → optional rollback and reference the existing
pipeline.apply and pipeline.preview calls in run.py so the code only writes
files when the explicit apply path is taken.
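The preview-by-default gate the prompt describes can be reduced to one pure decision function. This is a sketch under stated assumptions: `decide_action` and the stage names are hypothetical, not part of the refactron codebase.

```python
def decide_action(apply_flag: bool, dry_run: bool) -> str:
    """Pick the pipeline stage for `refactron run`.

    Preview is the safe default; a write only happens behind an
    explicit --apply flag, and --dry-run always wins.
    """
    if dry_run or not apply_flag:
        return "preview"
    return "backup_then_apply"


# The command body would branch on the result:
# "preview" -> pipeline.preview(...) only (no writes);
# "backup_then_apply" -> create backup session, then pipeline.apply(...).
```

This keeps the destructive path behind a single testable predicate instead of scattering flag checks through the command body.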

Comment on lines +120 to +129
    # --fail-on gate
    if fail_on:
        threshold = _LEVEL_RANK[fail_on.upper()]
        should_fail = any(
            session.issues_by_level.get(lvl, 0) > 0
            for lvl, rank in _LEVEL_RANK.items()
            if rank >= threshold
        )
        if should_fail:
            raise SystemExit(1)

⚠️ Potential issue | 🟠 Major

--fail-on is checking the pre-fix snapshot.

session.issues_by_level is populated during the initial analyze and pipeline.apply() never refreshes it, so a successful write run can still exit 1 on the old counts. Recompute the severity totals from a post-apply analysis before evaluating this gate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/run.py` around lines 120 - 129, The fail-on check uses stale
counts in session.issues_by_level from the pre-apply analysis; after calling
pipeline.apply() you must recompute severity totals before evaluating --fail-on.
Update the code so that once pipeline.apply() completes you invoke the session
analysis routine that refreshes issue counts (e.g. call the method that re-runs
analysis or recomputes totals on session such that session.issues_by_level is
updated), then compute threshold via _LEVEL_RANK[fail_on.upper()] and evaluate
should_fail against the refreshed session.issues_by_level and raise
SystemExit(1) if appropriate.
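A minimal sketch of the refreshed gate, assuming the post-apply analysis yields a fresh list of issue levels. Only `_LEVEL_RANK` and the `issues_by_level` shape come from the snippet above; `recount_levels` and `should_fail` are illustrative names.

```python
_LEVEL_RANK = {"INFO": 0, "WARNING": 1, "ERROR": 2, "CRITICAL": 3}


def recount_levels(levels):
    """Rebuild issues_by_level from a post-apply analysis result."""
    counts = {lvl: 0 for lvl in _LEVEL_RANK}
    for lvl in levels:
        counts[lvl] += 1
    return counts


def should_fail(issues_by_level, fail_on):
    """Gate on counts at or above the requested severity."""
    threshold = _LEVEL_RANK[fail_on.upper()]
    return any(
        count > 0
        for lvl, count in issues_by_level.items()
        if _LEVEL_RANK[lvl] >= threshold
    )


# Gate on the refreshed counts, not the pre-apply snapshot:
fresh = recount_levels(["INFO", "WARNING"])  # e.g. the errors were fixed
```

The key point is that `recount_levels` runs after `pipeline.apply()`, so a run that fixed every ERROR no longer exits 1 on stale counts.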

Comment on lines +27 to +30
    store = SessionStore(root_dir=Path(project_root))

    if list_sessions:
        sessions = store.list_sessions()

⚠️ Potential issue | 🟠 Major

Catch session-store read failures at the CLI boundary.

SessionStore.list_sessions() / load() only handle a narrow subset of parse errors. An unreadable or non-UTF8 file under .refactron/sessions/ can still raise here and crash refactron status, so wrap these calls and show a friendly error instead of a traceback.

Also applies to: 53-58

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/status.py` around lines 27 - 30, Wrap calls to
SessionStore.list_sessions() and SessionStore.load() in try/except at the CLI
boundary (in the status command) so any IO/Unicode/parse errors from unreadable
or non-UTF8 files under .refactron/sessions/ are caught; catch broad exceptions
like OSError/UnicodeDecodeError/ValueError (or Exception as a last resort), and
print a concise, friendly error message via the CLI logger/console instead of
letting the traceback propagate, then return a non-zero exit code. Locate the
calls to SessionStore.list_sessions() and SessionStore.load() in the status
command and add the try/except blocks around them, ensuring the error message
references the sessions directory/context so users know which file caused the
problem.
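One way to wrap the store calls is a small helper at the CLI boundary. This is a sketch (`safe_store_call` is a hypothetical name), but the exception tuple mirrors the ones named above.

```python
from typing import Callable, Optional, Tuple, TypeVar

T = TypeVar("T")


def safe_store_call(
    action: Callable[[], T], sessions_dir: str
) -> Tuple[Optional[T], Optional[str]]:
    """Run a SessionStore call, turning low-level errors into a CLI message."""
    try:
        return action(), None
    except (OSError, UnicodeDecodeError, ValueError) as exc:
        return None, f"Could not read sessions under {sessions_dir}: {exc}"


# A corrupt session file surfaces as json.JSONDecodeError (a ValueError subclass):
def broken_list():
    raise ValueError("Expecting value: line 1 column 1 (char 0)")


sessions, err = safe_store_call(broken_list, ".refactron/sessions")
```

The status command would print `err` via the console and return a non-zero exit code instead of letting the traceback escape.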

Comment on lines +142 to +145
    def __init__(self, root_dir: Optional[Path] = None):
        self.root_dir = Path(root_dir) if root_dir else Path.cwd()
        self.sessions_dir = self.root_dir / self.SESSIONS_DIR


⚠️ Potential issue | 🟡 Minor

Add the missing return annotation on SessionStore.__init__.

This constructor is still untyped, which is enough to trip disallow_untyped_defs under refactron/.

🛠️ Suggested fix
-    def __init__(self, root_dir: Optional[Path] = None):
+    def __init__(self, root_dir: Optional[Path] = None) -> None:
         self.root_dir = Path(root_dir) if root_dir else Path.cwd()
         self.sessions_dir = self.root_dir / self.SESSIONS_DIR
As per coding guidelines, "`refactron/**/*.py`: Type annotations are required in refactron/ with mypy disallow_untyped_defs = true enabled".
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-    def __init__(self, root_dir: Optional[Path] = None):
+    def __init__(self, root_dir: Optional[Path] = None) -> None:
         self.root_dir = Path(root_dir) if root_dir else Path.cwd()
         self.sessions_dir = self.root_dir / self.SESSIONS_DIR
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/core/pipeline_session.py` around lines 142 - 145, The constructor
SessionStore.__init__ is missing a return type annotation which violates
disallow_untyped_defs; update the method signature in class SessionStore (def
__init__(self, root_dir: Optional[Path] = None)) to include an explicit -> None
return annotation, and ensure Optional and Path are imported/available in that
module so the type hint resolves.

Comment on lines +146 to +147
    def _session_path(self, session_id: str) -> Path:
        return self.sessions_dir / f"{session_id}.json"

⚠️ Potential issue | 🟠 Major

Treat session_id as data, not a path fragment.

_session_path() interpolates raw session ids into a filesystem path, and load() still returns the parsed object even when its embedded session_id does not match the requested one. A CLI value like ../other/session can escape .refactron/sessions, and a tampered JSON file can still be treated as the requested session. Reject ids that contain path separators / .. and fail closed on mismatches.

🛠️ Suggested fix
+import re
@@
+_SESSION_ID_RE = re.compile(r"^sess_\d{8}_\d{6}_\d{6}$")
+
@@
     def _session_path(self, session_id: str) -> Path:
+        if not _SESSION_ID_RE.fullmatch(session_id):
+            raise ValueError(f"Invalid session id: {session_id}")
         return self.sessions_dir / f"{session_id}.json"
@@
         if session.session_id != session_id:
             _logger.warning(
                 "Session ID mismatch: requested %s, got %s", session_id, session.session_id
             )
-        return session
+            return None
+        return session

Also applies to: 162-166

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/core/pipeline_session.py` around lines 146 - 147, Treat the
session_id as untrusted data: validate it in _session_path and any callers
(reject if it contains path separators '/' or '\\' or the segment '..', or use a
strict allowlist/sanitization), and when loading a session (the load() code that
reads the JSON) verify the parsed object's session_id exactly equals the
requested session_id and raise an error if not; ensure all places that build
paths from session_id use _session_path so the validation is centralized and the
code fails closed on mismatches or invalid ids.
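A runnable version of the validation. The permissive allowlist pattern here is an assumption; the suggested fix above pins it to the exact `sess_YYYYMMDD_HHMMSS_NNNNNN` format instead, which is stricter and preferable when the id format is known.

```python
import re
from pathlib import Path

_SESSION_ID_RE = re.compile(r"^[A-Za-z0-9_-]+$")  # assumed allowlist


def session_path(sessions_dir: Path, session_id: str) -> Path:
    """Fail closed on anything that could escape sessions_dir."""
    if not _SESSION_ID_RE.fullmatch(session_id):
        raise ValueError(f"Invalid session id: {session_id!r}")
    return sessions_dir / f"{session_id}.json"
```

Because `/`, `\`, and `.` are outside the character class, traversal values like `../other/session` are rejected before any path is built.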

Comment on lines +149 to +152
    def save(self, session: PipelineSession) -> None:
        self.sessions_dir.mkdir(parents=True, exist_ok=True)
        path = self._session_path(session.session_id)
        path.write_text(json.dumps(session.to_dict(), indent=2), encoding="utf-8")

⚠️ Potential issue | 🟠 Major

Persist session files atomically and fall back to the newest valid one.

save() writes JSON in place, and load_latest() only attempts to parse the single lexicographically newest file. If the process dies mid-write, that newest session becomes malformed and every load_latest() caller starts returning None even though older valid sessions still exist.

🛠️ Suggested fix
     def save(self, session: PipelineSession) -> None:
         self.sessions_dir.mkdir(parents=True, exist_ok=True)
         path = self._session_path(session.session_id)
-        path.write_text(json.dumps(session.to_dict(), indent=2), encoding="utf-8")
+        tmp_path = path.with_suffix(".json.tmp")
+        tmp_path.write_text(json.dumps(session.to_dict(), indent=2), encoding="utf-8")
+        tmp_path.replace(path)
@@
     def load_latest(self) -> Optional[PipelineSession]:
         if not self.sessions_dir.exists():
             return None
         paths = sorted(self.sessions_dir.glob("*.json"))
         if not paths:
             return None
-        try:
-            return PipelineSession.from_dict(json.loads(paths[-1].read_text(encoding="utf-8")))
-        except (json.JSONDecodeError, KeyError, ValueError):
-            return None
+        for path in reversed(paths):
+            try:
+                return PipelineSession.from_dict(json.loads(path.read_text(encoding="utf-8")))
+            except (json.JSONDecodeError, KeyError, ValueError, TypeError):
+                continue
+        return None

Also applies to: 168-177

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/core/pipeline_session.py` around lines 149 - 152, The save()
implementation writes session JSON in-place which can produce a
truncated/corrupt newest file; change save(session: PipelineSession) to write to
a temporary file in the same directory (e.g.,
_session_path(session.session_id).with_suffix(".tmp")), flush and fsync the temp
file, then atomically rename/move it to the final path; reference
session.to_dict(), sessions_dir and _session_path. Update load_latest() to not
assume the single lexicographically newest file is valid: list files in
sessions_dir sorted descending, attempt to open and json.loads each in order,
returning the first successfully parsed PipelineSession (skip malformed files
and continue), and only return None if none parse successfully.
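The same fix as a self-contained sketch, using only the stdlib. Function names are illustrative; in the real code these live on `SessionStore`.

```python
import json
import os
from pathlib import Path


def atomic_save(path: Path, payload: dict) -> None:
    """Write JSON to a sibling temp file, fsync, then atomically swap it in."""
    tmp = path.with_suffix(".json.tmp")
    with tmp.open("w", encoding="utf-8") as fh:
        json.dump(payload, fh, indent=2)
        fh.flush()
        os.fsync(fh.fileno())
    tmp.replace(path)  # os.replace: atomic on POSIX


def load_latest(sessions_dir: Path):
    """Walk newest-first and skip any file that fails to parse."""
    for p in sorted(sessions_dir.glob("*.json"), reverse=True):
        try:
            return json.loads(p.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue
    return None
```

A crash mid-write leaves only a stale `.json.tmp` behind; the last complete session file stays intact, and a truncated newest file no longer blinds every `load_latest()` caller.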

Comment on lines +96 to +118
            fixer_name = self._find_fixer_name(issue)
            item = FixQueueItem(
                issue_id=f"issue_{idx:04d}",
                file_path=str(issue.file_path),
                line_number=issue.line_number,
                level=_LEVEL_MAP.get(issue.level, "INFO"),
                message=issue.message,
                fixer_name=fixer_name or "none",
                status=FixStatus.PENDING if fixer_name else FixStatus.SKIPPED,
            )
            session.fix_queue.append(item)
        self.store.save(session)

    def _find_fixer_name(self, issue: CodeIssue) -> Optional[str]:
        if not self.fix_engine.can_fix(issue):
            return None
        for name, fixer in self.fix_engine.fixers.items():
            try:
                result = fixer.preview(issue, "x = 1\n")
                if result.success:
                    return name
            except Exception:
                continue

⚠️ Potential issue | 🟠 Major

Queue/apply loses the original issue identity.

queue_issues() decides PENDING by probing fixers against "x = 1\n", then apply() rebuilds each queued item as a fresh CodeIssue with IssueCategory.COMPLEXITY and no rule_id/metadata. AutoFixEngine.fix_file() re-checks those reconstructed issues, so non-complexity findings can be queued as fixable and later skipped or misapplied. Persist the original issue payload and preview against the file's real contents instead of round-tripping through a synthetic issue.

Also applies to: 185-197

🧰 Tools
🪛 Ruff (0.15.9)

[error] 117-118: try-except-continue detected, consider logging the exception

(S112)


[warning] 117-117: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/core/pipeline.py` around lines 96 - 118, queue_issues() currently
probes fixers using a synthetic snippet and stores minimal info in FixQueueItem,
and apply() rebuilds a generic CodeIssue causing loss of original identity;
change FixQueueItem to persist the original issue payload (rule_id, category,
metadata, full message, etc.) and have _find_fixer_name call each fixer's
preview against the real file contents (load the file at issue.file_path)
instead of "x = 1\n"; ensure apply() reconstructs the CodeIssue from the stored
original payload (not as IssueCategory.COMPLEXITY) before calling
AutoFixEngine.fix_file(), so AutoFixEngine.fix_file() receives the original
issue with correct rule_id/metadata and previews are based on actual file text.
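The shape of the fix can be sketched as a richer queue item that carries the original payload. Fields beyond those in the snippet above are hypothetical, as is the example rule id.

```python
from dataclasses import asdict, dataclass, field


@dataclass
class FixQueueItem:
    issue_id: str
    file_path: str
    line_number: int
    level: str
    message: str
    fixer_name: str
    # Hypothetical addition: the full original-issue payload, so apply()
    # can rebuild the exact CodeIssue instead of a generic COMPLEXITY one.
    original_issue: dict = field(default_factory=dict)


item = FixQueueItem(
    issue_id="issue_0001",
    file_path="pkg/mod.py",
    line_number=12,
    level="WARNING",
    message="magic number in expression",
    fixer_name="magic_numbers",
    original_issue={
        "rule_id": "MAGIC001",  # illustrative rule id
        "category": "readability",
        "metadata": {"value": 42},
    },
)
payload = asdict(item)  # JSON-serializable for SessionStore.save()
```

With the payload persisted, `apply()` reconstructs the issue from `original_issue` rather than inventing one, and `_find_fixer_name` can preview against the real text of `file_path` instead of `"x = 1\n"`.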

Comment on lines +136 to +174
        backup_session_id: Optional[str] = None
        if not dry_run and files_to_fix:
            backup_session_id = self.backup_manager.create_backup_session(
                description=f"autofix session {session.session_id}"
            )
            session.backup_session_id = backup_session_id

        for file_path_str, items in files_to_fix.items():
            file_path = Path(file_path_str)
            if not file_path.exists():
                for item in items:
                    item.status = FixStatus.BLOCKED
                    item.block_reason = "file not found"
                    session.blocked_fixes.append(item)
                continue

            issues_for_file = self._queue_items_to_issues(items, file_path)
            try:
                fixed_code, diff = self.fix_engine.fix_file(
                    file_path, issues_for_file, dry_run=dry_run, verify=verify
                )
            except Exception as exc:
                logger.warning("fix_file failed for %s: %s", file_path, exc)
                for item in items:
                    item.status = FixStatus.BLOCKED
                    item.block_reason = str(exc)
                    session.blocked_fixes.append(item)
                continue

            if diff:
                for item in items:
                    item.diff = diff
                    if not dry_run:
                        item.status = FixStatus.APPLIED
                        if backup_session_id:
                            item.backup_path = str(
                                self.backup_manager.backup_dir / backup_session_id / file_path.name
                            )
                        session.applied_fixes.append(item)

⚠️ Potential issue | 🔴 Critical

No file backup is taken before non-dry-run writes.

create_backup_session() only gives you a session id here; nothing in this method copies the files that fix_file(..., dry_run=False) is about to overwrite. The new rollback --pipeline-session flow depends on session.backup_session_id, so this can produce sessions that look rollback-capable but have nothing restorable. Back up each target file before writing, then validate the backup session before continuing.

Based on learnings, "All refactoring must go through safety-first pipeline: preview → backup → apply → optional rollback". As per coding guidelines, "refactron/core/*.py: Use BackupManager.validate_backup_integrity(session_id) for backup integrity validation, returning (valid_paths, corrupt_paths)".

🧰 Tools
🪛 Ruff (0.15.9)

[warning] 154-154: Unpacked variable fixed_code is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)


[warning] 157-157: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/core/pipeline.py` around lines 136 - 174, Before applying fixes
when not dry_run, create the backup session
(backup_manager.create_backup_session) and then back up each target file into
that session prior to calling fix_engine.fix_file; after backing files, call
BackupManager.validate_backup_integrity(session.backup_session_id) and if it
returns any corrupt_paths mark all related FixItems as BLOCKED (set
block_reason) and append them to session.blocked_fixes, then skip applying fixes
for those files. Ensure when setting item.backup_path you point to
backup_manager.backup_dir / session.backup_session_id / file_path.name and only
proceed to call fix_engine.fix_file(file_path, ...) if the backup validation
succeeded.
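A minimal stdlib sketch of the missing back-up-then-validate step. The real code would go through BackupManager, whose `validate_backup_integrity` is said to return `(valid_paths, corrupt_paths)`; the helper names and SHA-256 scheme here are assumptions.

```python
import hashlib
import shutil
from pathlib import Path


def backup_files(files, backup_dir: Path) -> dict:
    """Copy each target file into backup_dir, recording a SHA-256 per copy."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    digests = {}
    for src in files:
        dst = backup_dir / src.name
        shutil.copy2(src, dst)
        digests[dst] = hashlib.sha256(dst.read_bytes()).hexdigest()
    return digests


def validate_backups(digests: dict):
    """Re-hash each backup; mirrors the (valid_paths, corrupt_paths) split."""
    valid, corrupt = [], []
    for path, expected in digests.items():
        actual = hashlib.sha256(path.read_bytes()).hexdigest()
        (valid if actual == expected else corrupt).append(path)
    return valid, corrupt
```

Only if `corrupt` comes back empty would the pipeline go on to call `fix_engine.fix_file`; otherwise the related queue items are marked BLOCKED, so a session never advertises a rollback it cannot honor.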

omsherikar and others added 3 commits April 4, 2026 01:30
`IncrementalAnalysisTracker` was filtering out unchanged files on every
run after the first, causing `refactron analyze` to return zero issues
the second time. Incremental filtering is a programmatic-API optimization
and has no place in the CLI where users expect full results every run.

Both `analysis.py` and `RefactronPipeline.analyze()` now unconditionally
set `cfg.enable_incremental_analysis = False` before running.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
refactron/cli/analysis.py (1)

220-236: ⚠️ Potential issue | 🟠 Major

--format json skips the new session workflow entirely.

The JSON branch exits before this session block runs, so --fix-on never queues anything and no current session is written in CI/scripted mode. Move the session persistence ahead of the JSON SystemExit so the new pipeline behavior is format-independent.

Also applies to: 304-323

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/analysis.py` around lines 220 - 236, The JSON output branch
currently prints payload and calls raise SystemExit before the session
persistence logic runs, which prevents --fix-on from queuing fixes and stops the
current session from being written; move the session save/queueing code (the
session persistence logic used elsewhere in this file) to execute before the
JSON branch's SystemExit so that regardless of output_format (including the
other JSON block at lines ~304-323) the code that enqueues fixes and writes the
current session runs prior to exiting; ensure you reference and invoke the same
session persistence routine used by the normal output path and then keep the
click.echo(_json.dumps(payload, indent=2)) and raise SystemExit(1 if
summary["critical"] > 0 else 0) after that.
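The reordering the prompt asks for amounts to hoisting the persistence call above the JSON exit. A sketch, with all names hypothetical except the payload/summary shape quoted above:

```python
import json


def emit_analysis(payload, persist_session, as_json):
    """Persist the session before any early exit, so --format json
    still queues fixes and records the current session."""
    persist_session(payload)  # session save + --fix-on queueing happen first
    if as_json:
        print(json.dumps(payload, indent=2))
    return 1 if payload["summary"]["critical"] > 0 else 0
```

Because persistence runs unconditionally, CI pipelines that consume JSON output get the same session files as interactive runs.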
♻️ Duplicate comments (7)
refactron/core/pipeline_session.py (3)

142-144: ⚠️ Potential issue | 🟡 Minor

Add the missing -> None on SessionStore.__init__.

This constructor is still untyped, which is enough to trip disallow_untyped_defs under refactron/.

Proposed fix
-    def __init__(self, root_dir: Optional[Path] = None):
+    def __init__(self, root_dir: Optional[Path] = None) -> None:
         self.root_dir = Path(root_dir) if root_dir else Path.cwd()
         self.sessions_dir = self.root_dir / self.SESSIONS_DIR

As per coding guidelines, "refactron/**/*.py: Type annotations are required in refactron/ with mypy disallow_untyped_defs = true enabled".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/core/pipeline_session.py` around lines 142 - 144, The constructor
SessionStore.__init__ is missing a return type annotation; add the explicit "->
None" to the __init__ signature in the SessionStore class (the method that sets
self.root_dir and self.sessions_dir and references self.SESSIONS_DIR) so the
method is fully typed and satisfies disallow_untyped_defs under refactron.

149-152: ⚠️ Potential issue | 🟠 Major

Make session persistence atomic and skip corrupt newest files.

save() still writes JSON in place, and load_latest() still only attempts to parse the single newest file. If the process dies mid-write, one truncated newest session makes every load_latest() caller return None even when older valid sessions still exist.

Proposed fix
+import os
@@
     def save(self, session: PipelineSession) -> None:
         self.sessions_dir.mkdir(parents=True, exist_ok=True)
         path = self._session_path(session.session_id)
-        path.write_text(json.dumps(session.to_dict(), indent=2), encoding="utf-8")
+        tmp_path = path.with_suffix(".json.tmp")
+        with tmp_path.open("w", encoding="utf-8") as fh:
+            json.dump(session.to_dict(), fh, indent=2)
+            fh.flush()
+            os.fsync(fh.fileno())
+        tmp_path.replace(path)
@@
     def load_latest(self) -> Optional[PipelineSession]:
         if not self.sessions_dir.exists():
             return None
         paths = sorted(self.sessions_dir.glob("*.json"))
         if not paths:
             return None
-        try:
-            return PipelineSession.from_dict(json.loads(paths[-1].read_text(encoding="utf-8")))
-        except (json.JSONDecodeError, KeyError, ValueError):
-            return None
+        for path in reversed(paths):
+            try:
+                return PipelineSession.from_dict(json.loads(path.read_text(encoding="utf-8")))
+            except (json.JSONDecodeError, KeyError, ValueError, TypeError):
+                continue
+        return None

Also applies to: 168-177

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/core/pipeline_session.py` around lines 149 - 152, The save()
implementation writes session JSON in-place and can leave a truncated newest
file; update save() (function save and helper _session_path/sessions_dir) to
write atomically by writing to a temporary file in the same sessions_dir (e.g.,
session_id + ".tmp" or use a temp name), fsync the file, then os.replace/rename
to the final path to ensure an atomic swap; then modify load_latest() to iterate
session files from newest to oldest (by mtime or lexicographic order),
attempting to read and json.loads each file and skipping any that raise JSON
parse errors or IO errors until a valid session is found (returning None only if
none valid).

146-147: ⚠️ Potential issue | 🟠 Major

Validate session_id before turning it into a path.

_session_path() still interpolates raw ids into a filesystem path, so values like ../other/session can escape .refactron/sessions. load() also still returns the parsed object even when its embedded session_id does not match the requested id, which means a tampered JSON file can be treated as the requested session.

Proposed fix
+import re
@@
+_SESSION_ID_RE = re.compile(r"^sess_\d{8}_\d{6}_\d{6}$")
+
@@
     def _session_path(self, session_id: str) -> Path:
+        if not _SESSION_ID_RE.fullmatch(session_id):
+            raise ValueError(f"Invalid session id: {session_id}")
         return self.sessions_dir / f"{session_id}.json"
@@
         if session.session_id != session_id:
             _logger.warning(
                 "Session ID mismatch: requested %s, got %s", session_id, session.session_id
             )
-        return session
+            return None
+        return session

Also applies to: 154-166

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/core/pipeline_session.py` around lines 146 - 147, Validate
session_id before constructing file paths and when loading JSON: in
_session_path(session_id) ensure session_id contains only a safe token (e.g.,
alphanumeric, hyphen/underscore) or normalize and reject any value that would
traverse directories (prevent characters like '/' or '..'); build the path only
after validation (instead of directly using sessions_dir /
f"{session_id}.json"). In the load(...) method, after reading and parsing the
file, verify the parsed object's embedded session_id equals the requested
session_id and raise an error if it differs (do not return parsed data on
mismatch). Use the existing symbols sessions_dir, _session_path, and load to
locate where to add validation and the integrity check, and raise a clear
ValueError or custom exception when validation fails.
refactron/cli/status.py (2)

108-112: ⚠️ Potential issue | 🟠 Major

The rollback hint should use --pipeline-session.

rollback --session is the backup-session path, so this generated next step will not undo the pipeline session being displayed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/status.py` around lines 108 - 112, The rollback hint currently
prints "refactron rollback --session {session.session_id}" which is the
backup-session flag; change it to use the pipeline-session flag so the displayed
pipeline session can be undone. Update the string in the status display (where
console.print is called for the applied branch) to use "--pipeline-session
{session.session_id}" instead of "--session", ensuring the condition that prints
when applied remains the same (references: session.session_id, applied, the
console.print call in status.py).

39-67: ⚠️ Potential issue | 🟠 Major

Handle unreadable or corrupt session files at the CLI boundary.

store.list_sessions(), load_current(), load(), and load_latest() can still surface OSError, UnicodeDecodeError, or TypeError from .refactron/. Right now that turns refactron status into a traceback instead of a friendly CLI error.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/status.py` around lines 39 - 67, Wrap calls to
store.list_sessions(), store.load(session_id), store.load_current(), and
store.load_latest() in try/except that catches OSError, UnicodeDecodeError, and
TypeError at the CLI boundary in status.py; on exception print a concise
user-friendly error via console.print (e.g., "Error reading session data: <brief
msg>") and return/non-zero exit so a traceback is not shown. Ensure both the
list_sessions branch and the session-loading branch around session_id use this
error handling so unreadable/corrupt files are handled uniformly.
refactron/cli/refactor.py (2)

309-315: ⚠️ Potential issue | 🟠 Major

Resolve sessions and backups from the pipeline's project root, not the caller's cwd.

Both the autofix --session path and the new rollback path still build SessionStore/BackupManager from _target_path.parent or Path.cwd(). Resuming or rolling back the same session from a subdirectory or another shell location will miss the saved session/backups or target the wrong project.

Also applies to: 491-509

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/refactor.py` around lines 309 - 315, The session and backup
lookup currently use the caller cwd or _target_path.parent instead of the
pipeline's project root; change construction of SessionStore and BackupManager
to use the RefactronPipeline instance's project_root (use
_pipeline.project_root) so _pipeline.store.load(session_id) and any
BackupManager operations resolve against the actual project root. Update the
creation points that build SessionStore/BackupManager (the code around where
RefactronPipeline is instantiated and the separate block referenced at the other
location) to pass _pipeline.project_root rather than Path.cwd() or
_target_path.parent.
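
One way the shared project-root helper could behave, sketched under the assumption that the root is marked by a `.refactron` or `.git` directory (the real helper may use different markers):

```python
from pathlib import Path


def find_project_root(start: Path) -> Path:
    """Walk upward from `start` to the nearest directory with a root marker."""
    current = start.resolve()
    if current.is_file():
        current = current.parent
    for candidate in (current, *current.parents):
        if (candidate / ".refactron").exists() or (candidate / ".git").exists():
            return candidate
    return current  # no marker found: fall back to the target directory
```

Rooting both `SessionStore` and `BackupManager` off this value (or off `_pipeline.project_root`) keeps resume and rollback working regardless of the shell's working directory.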

373-376: ⚠️ Potential issue | 🟠 Major

Use --pipeline-session in the undo hint.

The printed command still sends users into the backup-session rollback path instead of the new pipeline rollback flow.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/refactor.py` around lines 373 - 376, The undo hint printed when
_applied > 0 uses the old flag and sends users to backup-session rollback;
update the console.print invocation that builds the hint (where
_pipeline_session.session_id is referenced) to show the new flag name
--pipeline-session instead of --session so the suggested command is "refactron
rollback --pipeline-session <session_id>" and points users to the pipeline
rollback flow.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@refactron/cli/analysis.py`:
- Around line 285-287: The code builds _project_root from the raw target (using
_target_path.parent) which makes nested targets persist the session under the
nested folder; replace that logic with the existing project-root detection
helper used elsewhere (call that helper with the provided target/_target_path to
compute the true repository root) and pass the returned root into
RefactronPipeline(project_root=...) so sessions are persisted under the
repository root; apply the same replacement for the other occurrence around
lines 312-313 (i.e., stop using _target_path.parent and use the shared
project-root helper instead).

In `@refactron/cli/refactor.py`:
- Around line 494-495: The current code unconditionally marks a session
ROLLED_BACK and removes the active session file after calling
BackupManager.rollback_session(), which can return partial failures; change the
logic in the block around _resolved_id (and the similar block at lines ~509-513)
to inspect the result of BackupManager.rollback_session() and only (1) set the
session state to ROLLED_BACK and (2) delete .refactron/current when
rollback_session reports all restores succeeded; if rollback_session reports any
failures, leave the session state unchanged (or set an explicit PARTIAL_ROLLBACK
status if available) and do not remove .refactron/current unless _resolved_id
matches the currently active session and the rollback fully succeeded; reference
functions/vars: BackupManager.rollback_session(), _resolved_id,
pipeline_session_id, _store.get_current_id(), and the ROLLED_BACK state to
locate and update the code paths.
- Around line 306-312: The new session-aware branch constructs RefactronPipeline
using _target_path/_project_root and returns early, which bypasses
_load_config(), the safety_level mapping, and flags like --config, --profile,
and --environment; update the flow so that before creating or returning the
RefactronPipeline instance (_pipeline) you call _load_config() and apply the
resolved configuration and safety_level (or pass them into RefactronPipeline
constructor) so the session-aware path honors --config, --profile, --environment
and --safety-level exactly like the original branch did.
- Around line 348-352: The call to _pipeline.apply(...) is missing the preview
argument so the CLI's preview flag is ignored (causing autofix to write
changes); update the two places where _pipeline.apply is invoked (the call using
_pipeline_session and the similar block further down) to pass preview=preview
along with dry_run=dry_run and verify=verify so the
RefactronPipeline.apply(preview, dry_run, verify, ...) safety-first flow
(preview → backup → apply → optional rollback) is preserved.

---

Outside diff comments:
In `@refactron/cli/analysis.py`:
- Around line 220-236: The JSON output branch currently prints payload and calls
raise SystemExit before the session persistence logic runs, which prevents
--fix-on from queuing fixes and stops the current session from being written;
move the session save/queueing code (the session persistence logic used
elsewhere in this file) to execute before the JSON branch's SystemExit so that
regardless of output_format (including the other JSON block at lines ~304-323)
the code that enqueues fixes and writes the current session runs prior to
exiting; ensure you reference and invoke the same session persistence routine
used by the normal output path and then keep the click.echo(_json.dumps(payload,
indent=2)) and raise SystemExit(1 if summary["critical"] > 0 else 0) after that.

---

Duplicate comments:
In `@refactron/cli/refactor.py`:
- Around line 309-315: The session and backup lookup currently use the caller
cwd or _target_path.parent instead of the pipeline's project root; change
construction of SessionStore and BackupManager to use the RefactronPipeline
instance's project_root (use _pipeline.project_root) so
_pipeline.store.load(session_id) and any BackupManager operations resolve
against the actual project root. Update the creation points that build
SessionStore/BackupManager (the code around where RefactronPipeline is
instantiated and the separate block referenced at the other location) to pass
_pipeline.project_root rather than Path.cwd() or _target_path.parent.
- Around line 373-376: The undo hint printed when _applied > 0 uses the old flag
and sends users to backup-session rollback; update the console.print invocation
that builds the hint (where _pipeline_session.session_id is referenced) to show
the new flag name --pipeline-session instead of --session so the suggested
command is "refactron rollback --pipeline-session <session_id>" and points users
to the pipeline rollback flow.

In `@refactron/cli/status.py`:
- Around line 108-112: The rollback hint currently prints "refactron rollback
--session {session.session_id}" which is the backup-session flag; change it to
use the pipeline-session flag so the displayed pipeline session can be undone.
Update the string in the status display (where console.print is called for the
applied branch) to use "--pipeline-session {session.session_id}" instead of
"--session", ensuring the condition that prints when applied remains the same
(references: session.session_id, applied, the console.print call in status.py).
- Around line 39-67: Wrap calls to store.list_sessions(),
store.load(session_id), store.load_current(), and store.load_latest() in
try/except that catches OSError, UnicodeDecodeError, and TypeError at the CLI
boundary in status.py; on exception print a concise user-friendly error via
console.print (e.g., "Error reading session data: <brief msg>") and
return/non-zero exit so a traceback is not shown. Ensure both the list_sessions
branch and the session-loading branch around session_id use this error handling
so unreadable/corrupt files are handled uniformly.

In `@refactron/core/pipeline_session.py`:
- Around line 142-144: The constructor SessionStore.__init__ is missing a return
type annotation; add the explicit "-> None" to the __init__ signature in the
SessionStore class (the method that sets self.root_dir and self.sessions_dir and
references self.SESSIONS_DIR) so the method is fully typed and satisfies
disallow_untyped_defs under refactron.
- Around line 149-152: The save() implementation writes session JSON in-place
and can leave a truncated newest file; update save() (function save and helper
_session_path/sessions_dir) to write atomically by writing to a temporary file
in the same sessions_dir (e.g., session_id + ".tmp" or use a temp name), fsync
the file, then os.replace/rename to the final path to ensure an atomic swap;
then modify load_latest() to iterate session files from newest to oldest (by
mtime or lexicographic order), attempting to read and json.loads each file and
skipping any that raise JSON parse errors or IO errors until a valid session is
found (returning None only if none valid).
- Around line 146-147: Validate session_id before constructing file paths and
when loading JSON: in _session_path(session_id) ensure session_id contains only
a safe token (e.g., alphanumeric, hyphen/underscore) or normalize and reject any
value that would traverse directories (prevent characters like '/' or '..');
build the path only after validation (instead of directly using sessions_dir /
f"{session_id}.json"). In the load(...) method, after reading and parsing the
file, verify the parsed object's embedded session_id equals the requested
session_id and raise an error if it differs (do not return parsed data on
mismatch). Use the existing symbols sessions_dir, _session_path, and load to
locate where to add validation and the integrity check, and raise a clear
ValueError or custom exception when validation fails.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 1cdccfa1-0941-4e0c-b022-a959b38ad3e1

📥 Commits

Reviewing files that changed from the base of the PR and between da7fbed and ab81338.

📒 Files selected for processing (6)
  • refactron/cli/analysis.py
  • refactron/cli/refactor.py
  • refactron/cli/run.py
  • refactron/cli/status.py
  • refactron/core/pipeline_session.py
  • tests/test_cli_status.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/test_cli_status.py
  • refactron/cli/run.py

Comment on lines +285 to +287
_target_path = Path(target) if target else Path.cwd()
_project_root = _target_path if _target_path.is_dir() else _target_path.parent
_pipeline = RefactronPipeline(project_root=_project_root)

⚠️ Potential issue | 🟠 Major

Persist the session under the project root, not target.parent.

For nested targets, _project_root becomes the selected directory or file parent, so the session/current pointer lands under that nested path instead of the repository root. refactron status and refactron autofix from the workspace root then won't see the session. Reuse the same project-root detection helper you already use elsewhere instead of rebuilding it from raw target.

Also applies to: 312-313

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/analysis.py` around lines 285 - 287, The code builds
_project_root from the raw target (using _target_path.parent) which makes nested
targets persist the session under the nested folder; replace that logic with the
existing project-root detection helper used elsewhere (call that helper with the
provided target/_target_path to compute the true repository root) and pass the
returned root into RefactronPipeline(project_root=...) so sessions are persisted
under the repository root; apply the same replacement for the other occurrence
around lines 312-313 (i.e., stop using _target_path.parent and use the shared
project-root helper instead).

Comment on lines +306 to +312
# ── Session-aware pipeline ────────────────────────────────────────
from refactron.core.pipeline import RefactronPipeline

_target_path = Path(target) if target else Path.cwd()
_project_root = _target_path if _target_path.is_dir() else _target_path.parent
_pipeline = RefactronPipeline(project_root=_project_root)


⚠️ Potential issue | 🟠 Major

The new session-aware path bypasses the command's config and safety settings.

Because this branch returns at Line 378, the old _load_config() call and safety_level mapping are now unreachable. --config, --profile, --environment, and --safety-level no longer affect the new pipeline flow, so the command surface and runtime behavior are out of sync.

Also applies to: 378-404

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/refactor.py` around lines 306 - 312, The new session-aware
branch constructs RefactronPipeline using _target_path/_project_root and returns
early, which bypasses _load_config(), the safety_level mapping, and flags like
--config, --profile, and --environment; update the flow so that before creating
or returning the RefactronPipeline instance (_pipeline) you call _load_config()
and apply the resolved configuration and safety_level (or pass them into
RefactronPipeline constructor) so the session-aware path honors --config,
--profile, --environment and --safety-level exactly like the original branch
did.

Comment on lines +348 to +352
_pipeline.apply(
_pipeline_session,
dry_run=dry_run,
verify=verify,
)

⚠️ Potential issue | 🔴 Critical

autofix now applies by default even though --preview is still the default.

This branch ignores preview and passes only dry_run into RefactronPipeline.apply(). With the current defaults (preview=True, dry_run=False), refactron autofix writes changes immediately instead of previewing them.

Proposed fix
-    _pipeline.apply(
+    _effective_dry_run = dry_run or preview
+
+    _pipeline.apply(
         _pipeline_session,
-        dry_run=dry_run,
+        dry_run=_effective_dry_run,
         verify=verify,
     )
@@
-    if dry_run:
+    if _effective_dry_run:
         console.print("\n[bold]Dry-run complete[/bold]")

Based on learnings, "All refactoring must go through safety-first pipeline: preview → backup → apply → optional rollback."

Also applies to: 358-366

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/refactor.py` around lines 348 - 352, The call to
_pipeline.apply(...) is missing the preview argument so the CLI's preview flag
is ignored (causing autofix to write changes); update the two places where
_pipeline.apply is invoked (the call using _pipeline_session and the similar
block further down) to pass preview=preview along with dry_run=dry_run and
verify=verify so the RefactronPipeline.apply(preview, dry_run, verify, ...)
safety-first flow (preview → backup → apply → optional rollback) is preserved.

Comment on lines +494 to +495
_resolved_id = pipeline_session_id or _store.get_current_id()


⚠️ Potential issue | 🟠 Major

Don't clear/close the session unless rollback fully succeeded.

BackupManager.rollback_session() can return failed restores, but this branch still marks the session ROLLED_BACK and unconditionally deletes .refactron/current. That misreports partial rollbacks, and it also clears the active session even when the user explicitly rolled back some other session.

Also applies to: 509-513

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/refactor.py` around lines 494 - 495, The current code
unconditionally marks a session ROLLED_BACK and removes the active session file
after calling BackupManager.rollback_session(), which can return partial
failures; change the logic in the block around _resolved_id (and the similar
block at lines ~509-513) to inspect the result of
BackupManager.rollback_session() and only (1) set the session state to
ROLLED_BACK and (2) delete .refactron/current when rollback_session reports all
restores succeeded; if rollback_session reports any failures, leave the session
state unchanged (or set an explicit PARTIAL_ROLLBACK status if available) and do
not remove .refactron/current unless _resolved_id matches the currently active
session and the rollback fully succeeded; reference functions/vars:
BackupManager.rollback_session(), _resolved_id, pipeline_session_id,
_store.get_current_id(), and the ROLLED_BACK state to locate and update the code
paths.
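
The guarded completion step can be sketched as below; the `(restored, failed)` result shape, the `SessionState` values, and the store methods are assumptions standing in for the real API:

```python
from enum import Enum


class SessionState(Enum):
    APPLIED = "applied"
    ROLLED_BACK = "rolled_back"


def finish_rollback(session, store, restored: list, failed: list,
                    is_current: bool) -> None:
    """Close out a session only when every file was restored."""
    if failed:
        # Partial restore: leave the state and the current pointer untouched
        # so the user can retry instead of seeing a false success.
        return
    session.state = SessionState.ROLLED_BACK
    store.save(session)
    if is_current:  # never clear another session's pointer
        store.clear_current()
```

The `is_current` check covers the second half of the finding: rolling back a non-active session must not delete `.refactron/current`.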

omsherikar and others added 3 commits April 4, 2026 02:00
Previously `analyze` only queued issues when `--fix-on` was passed,
leaving the session empty and causing `autofix --dry-run` to silently
report "Dry-run complete" with nothing shown.

Now:
- `analyze` always queues all discovered issues into the session
- `autofix --fix-on CRITICAL` (default) filters which queued items
  actually get applied, marking the rest as skipped
- `autofix` shows a clear message when no items match the threshold
  and suggests `--fix-on WARNING` as next step
- Dry-run output shows the count and a helpful fallback message

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Previously the output was misleading - "0 items queued" and "No fixable
issues" with no explanation. Now:
- Shows: N issues · M have automated fixers · K no fixer available
- When M=0, explains which issue types CAN be auto-fixed and why
  complexity/code-smell issues require manual refactoring
- Removes redundant "Fixing issues at X+ level (N items)" noise line

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

♻️ Duplicate comments (8)
refactron/cli/refactor.py (5)

411-415: ⚠️ Potential issue | 🟠 Major

Use --pipeline-session in the undo hint.

The printed command still uses --session, which routes users into the legacy backup-session flow instead of the new pipeline rollback path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/refactor.py` around lines 411 - 415, The undo hint prints a
legacy flag (--session) causing the rollback to use the old backup-session flow;
update the printed command in the block that checks `_applied > 0` (where
`console.print` references `_pipeline_session.session_id`) to use
`--pipeline-session` instead of `--session` so the hint points to the new
pipeline rollback path.

547-551: ⚠️ Potential issue | 🟠 Major

Only mark the session rolled back after a complete restore.

BackupManager.rollback_session() reports failed files, but this code always sets ROLLED_BACK and clears the current pointer. Partial restores then look successful, and rolling back a different session can clear the active one.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/refactor.py` around lines 547 - 551, The code unconditionally
marks the pipeline session as rolled back and clears the current pointer after
calling _bm.rollback_session(_pipeline_session.backup_session_id); change this
to inspect the returned values (_restored_count, _failed) and only set
_pipeline_session.state = SessionState.ROLLED_BACK, call
_store.save(_pipeline_session), and _store.clear_current() when the rollback
completed without failures (e.g., _failed == 0 or similar success condition);
otherwise leave the session state unchanged (or set an appropriate
FAILED_ROLLBACK state) and persist that outcome so partial restores aren't
treated as successful.

383-387: ⚠️ Potential issue | 🔴 Critical

Default autofix now applies changes instead of previewing them.

The command still defaults to --preview, but this call only forwards dry_run. With the defaults (preview=True, dry_run=False), refactron autofix becomes a real apply.

Suggested fix
-    _pipeline.apply(
+    _effective_dry_run = dry_run or preview
+
+    _pipeline.apply(
         _pipeline_session,
-        dry_run=dry_run,
+        dry_run=_effective_dry_run,
         verify=verify,
     )
@@
-    if dry_run:
+    if _effective_dry_run:

Based on learnings, "All refactoring must go through safety-first pipeline: preview → backup → apply → optional rollback".

Also applies to: 393-404

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/refactor.py` around lines 383 - 387, The apply call on
_pipeline is only forwarding dry_run and verify, so the preview flag is ignored
and running the command with default preview=True still performs an actual
apply; update the _pipeline.apply invocation(s) (reference: _pipeline.apply and
_pipeline_session) to also accept and forward the preview/autofix flag (e.g.,
preview=preview or preview=not autofix depending on variable names) so the
pipeline honors preview mode, and make the same change for the other similar
call block around the _pipeline.apply at the 393-404 region.

318-320: ⚠️ Potential issue | 🟠 Major

Resolve sessions and backups from the pipeline's project root.

autofix roots the store off Path(target).parent, while rollback hard-codes Path.cwd() for both SessionStore and BackupManager. Resuming or rolling back the same pipeline session from a different directory will miss the saved session/backups or hit the wrong workspace.

Also applies to: 529-547

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/refactor.py` around lines 318 - 320, The pipeline project root
is computed into _project_root and passed to
RefactronPipeline(project_root=_project_root) but SessionStore and BackupManager
are still rooted off Path.cwd() or Path(target).parent in other flows (autofix,
rollback), causing resumed sessions/backups to be looked up in the wrong
workspace; update the code that constructs SessionStore and BackupManager (and
any uses in autofix and rollback handlers) to derive their base directory from
the pipeline's project_root (e.g., use RefactronPipeline.project_root or the
local _project_root variable) instead of hard-coding Path.cwd() or
Path(target).parent—also apply the same change to the other occurrence noted
(lines 529-547) so all session/backup operations consistently use the pipeline
project root.

315-320: ⚠️ Potential issue | 🟠 Major

The session-aware path bypasses the command's config and safety flags.

This branch returns at Line 416 without ever calling _load_config() or mapping --safety-level, and RefactronPipeline is instantiated with its default FixRiskLevel.SAFE. --config, --profile, --environment, and --safety-level therefore stop affecting autofix once the session-based flow is used.

Also applies to: 337-343, 383-416

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/refactor.py` around lines 315 - 320, The session-aware branch
currently constructs RefactronPipeline without loading command config or
applying CLI flags, so call _load_config() before returning and map CLI options
(--config, --profile, --environment, --safety-level) into the session flow;
ensure RefactronPipeline is instantiated with the selected risk level (pass the
mapped FixRiskLevel value instead of relying on the default) and preserve any
config-derived settings when creating _pipeline using _project_root/_target_path
so the session path respects config and safety flags.
refactron/core/pipeline.py (2)

140-178: ⚠️ Potential issue | 🔴 Critical

Take and validate real backups before any non-dry-run write.

This branch only creates a backup session id; it never copies the target files before fix_file(..., dry_run=False) can overwrite them. That leaves session.backup_session_id pointing at a rollback path that may have nothing restorable. Back up each target first, then call BackupManager.validate_backup_integrity(session.backup_session_id) and block applies if any path is corrupt.

Based on learnings, "All refactoring must go through safety-first pipeline: preview → backup → apply → optional rollback". As per coding guidelines, "refactron/core/*.py: Use BackupManager.validate_backup_integrity(session_id) for backup integrity validation, returning (valid_paths, corrupt_paths)."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/core/pipeline.py` around lines 140 - 178, The code creates a backup
session but never actually copies files before calling fix_engine.fix_file, so
create per-file backups immediately after obtaining backup_session_id (use
backup_manager.create_backup_session and backup_manager.backup_dir) and before
calling fix_engine.fix_file; then call
BackupManager.validate_backup_integrity(session.backup_session_id) to get
(valid_paths, corrupt_paths) and if any corrupt_paths exist mark the
corresponding FixItem(s) as BLOCKED with block_reason and add them to
session.blocked_fixes, only proceeding to call fix_engine.fix_file for files
whose backups validated; ensure session.backup_session_id remains set and
item.backup_path points to backup_manager.backup_dir / backup_session_id /
file_path.name for applied items.
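
A sketch of the back-up → validate → apply ordering; `validate` and `apply_fix` are injected stand-ins for `BackupManager.validate_backup_integrity` and the real fixer call, and the directory layout is an assumption:

```python
import shutil
from pathlib import Path


def backup_then_apply(files, backup_dir: Path, backup_session_id: str,
                      validate, apply_fix) -> list:
    """Copy each target into the backup session, validate, then apply.

    `validate(session_id)` must return (valid_paths, corrupt_paths);
    files whose backup is corrupt are skipped, never overwritten.
    """
    session_dir = backup_dir / backup_session_id
    session_dir.mkdir(parents=True, exist_ok=True)
    for f in files:
        shutil.copy2(f, session_dir / f.name)  # back up before any write
    _valid, corrupt = validate(backup_session_id)
    applied = []
    for f in files:
        if str(f) in corrupt:
            continue  # blocked: no trustworthy backup to roll back to
        apply_fix(f)
        applied.append(f)
    return applied
```

Only after this ordering does `session.backup_session_id` point at a directory that can actually restore what `fix_file(..., dry_run=False)` changed.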

97-123: ⚠️ Potential issue | 🟠 Major

Keep the original issue payload all the way through apply().

queue_issues() only persists a reduced shape, _find_fixer_name() probes fixers against "x = 1\n", and _queue_items_to_issues() later recreates everything as IssueCategory.COMPLEXITY. That means fix_file() sees a different issue than the analyzer produced, so items can be queued as fixable and then skipped or misapplied when you actually apply them.

Also applies to: 156-160, 189-200

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/core/pipeline.py` around lines 97 - 123, queue_issues() is
truncating and altering CodeIssue data (storing only a reduced FixQueueItem and
probing fixers with a dummy "x = 1\n" in _find_fixer_name()), causing mismatches
when _queue_items_to_issues() reconstructs issues and apply() sees different
payloads; fix by persisting the full original issue payload (serialize the
entire CodeIssue) into the queue item instead of losing fields, change
_find_fixer_name() to probe fixers using the actual issue source/content (or
skip probing and record candidate fixer names without mutating payload), and
ensure _queue_items_to_issues() deserializes and restores the original
IssueCategory and all fields so apply(), FixEngine.fixers, and FixStatus logic
operate on the exact analyzer-produced CodeIssue throughout.
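
The lossless round-trip the prompt asks for can be sketched like this; the `CodeIssue` fields shown are illustrative, not the project's actual dataclass:

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class CodeIssue:
    category: str
    severity: str
    file_path: str
    line: int
    message: str


def queue_item_from_issue(issue: CodeIssue) -> dict:
    # Persist the entire payload so apply() sees exactly what analyze() saw.
    return {"status": "pending", "issue": asdict(issue)}


def issue_from_queue_item(item: dict) -> CodeIssue:
    return CodeIssue(**item["issue"])
```

With the full payload stored, `_queue_items_to_issues()` becomes a pure deserialization step and no longer has to guess a category or probe fixers against dummy source.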
refactron/cli/analysis.py (1)

287-289: ⚠️ Potential issue | 🟠 Major

Persist the session under the actual project root.

Using Path(target) (or its parent for file targets) here makes .refactron land under the selected subdirectory or the file's parent instead of the repository root. Commands run later from the workspace root then miss the current session.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/analysis.py` around lines 287 - 289, The current logic sets
_project_root from Path(target) or its parent which places the session
(.refactron) under a selected subdirectory; update this to resolve and use the
repository root instead so sessions persist at the real project root. Replace
the _project_root computation used to construct
RefactronPipeline(project_root=_project_root) with code that finds the repo root
(e.g., walk parents from Path(target).resolve() or call git to get the
top-level) and set _project_root to that repository root before instantiating
RefactronPipeline; reference and update the symbols _target_path, _project_root,
and the RefactronPipeline instantiation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@refactron/cli/analysis.py`:
- Around line 292-304: When constructing the PipelineSession instance by hand,
explicitly set its lifecycle/status to SessionState.ANALYZED (or the dataclass
field name used for lifecycle, e.g., "status" or "state") so it matches
RefactronPipeline.analyze() behavior; update the PipelineSession(...) call in
analysis.py to include the lifecycle field with SessionState.ANALYZED and import
or reference SessionState where used (ensure you use the same enum member name
as in RefactronPipeline.analyze to keep readers and status checks consistent).
- Around line 100-106: The --fix-on flag is parsed but never used when saving
the queued session; modify the call to queue_issues so it respects the parsed
fix_on threshold by passing the computed minimum severity (from _FIX_LEVEL_MAP
using the fix_on value, e.g., _FIX_LEVEL_MAP[fix_on.upper()] when fix_on is not
None) as the min_level argument to queue_issues(_pipeline_session, _all_issues,
min_level=...), and ensure the same change is applied to the other code path(s)
mentioned (lines ~280-309) where queue_issues is invoked so the saved session
actually filters issues by the user-specified level.

In `@refactron/cli/refactor.py`:
- Around line 529-559: The rollback branch currently triggers as soon as
_resolved_id is set (via pipeline_session_id or SessionStore.get_current_id()),
which runs before handling explicit rollback modes like --list, --clear,
--use-git or legacy positional IDs; move or guard this logic so it only executes
when the user actually requested a rollback action (e.g., when a specific
rollback flag/mode is set), not merely when an active session exists: modify the
control flow around _resolved_id, SessionStore, and the block that uses
BackupManager.rollback_session so that flag checks for
list/clear/use-git/positional-id are evaluated first and only when none apply
and the explicit rollback mode is present do you load the session, validate
backup_session_id, prompt for confirmation, and call
BackupManager.rollback_session; ensure functions/classes referenced are
SessionStore, _resolved_id/pipeline_session_id, BackupManager.rollback_session,
and _pipeline_session are used in the guarded branch.
- Around line 354-364: The current loop mutates stored session items (setting
_item.status = _FixStatus.SKIPPED) which persists the --fix-on filter; instead
leave _pipeline_session.fix_queue unchanged and apply the threshold only
transiently. Replace the in-place mutation with logic that computes a local view
or predicates: use _LEVEL_RANK, _threshold and _FixStatus to count pending items
whose level meets the threshold (e.g., count items where item.status ==
_FixStatus.PENDING and _LEVEL_RANK.get(item.level.upper(),0) >= _threshold) and
pass that filtered/local view into the apply step or consumer; do not write back
SKIPPED into _pipeline_session.fix_queue. Ensure all references to
_pending_count and any downstream use operate on the transient/filtered view
rather than mutated stored items.

In `@refactron/core/pipeline.py`:
- Around line 12-24: The import block in refactron/core/pipeline.py is not
sorted per repository conventions; re-run isort (profile='black') on this module
to reorder the imports (the block importing AutoFixEngine, FixRiskLevel,
BackupManager, CodeIssue, IssueCategory, IssueLevel, FixQueueItem, FixStatus,
PipelineSession, SessionState, SessionStore, RefactronConfig, and Refactron) so
they follow the project's isort/black profile ordering and satisfy pre-commit.

---

Duplicate comments:
In `@refactron/cli/analysis.py`:
- Around line 287-289: The current logic sets _project_root from Path(target) or
its parent which places the session (.refactron) under a selected subdirectory;
update this to resolve and use the repository root instead so sessions persist
at the real project root. Replace the _project_root computation used to
construct RefactronPipeline(project_root=_project_root) with code that finds the
repo root (e.g., walk parents from Path(target).resolve() or call git to get the
top-level) and set _project_root to that repository root before instantiating
RefactronPipeline; reference and update the symbols _target_path, _project_root,
and the RefactronPipeline instantiation.

In `@refactron/cli/refactor.py`:
- Around line 411-415: The undo hint prints a legacy flag (--session) causing
the rollback to use the old backup-session flow; update the printed command in
the block that checks `_applied > 0` (where `console.print` references
`_pipeline_session.session_id`) to use `--pipeline-session` instead of
`--session` so the hint points to the new pipeline rollback path.
- Around line 547-551: The code unconditionally marks the pipeline session as
rolled back and clears the current pointer after calling
_bm.rollback_session(_pipeline_session.backup_session_id); change this to
inspect the returned values (_restored_count, _failed) and only set
_pipeline_session.state = SessionState.ROLLED_BACK, call
_store.save(_pipeline_session), and _store.clear_current() when the rollback
completed without failures (e.g., _failed == 0 or similar success condition);
otherwise leave the session state unchanged (or set an appropriate
FAILED_ROLLBACK state) and persist that outcome so partial restores aren't
treated as successful.
- Around line 383-387: The apply call on _pipeline is only forwarding dry_run
and verify, so the preview flag is ignored and running the command with default
preview=True still performs an actual apply; update the _pipeline.apply
invocation(s) (reference: _pipeline.apply and _pipeline_session) to also accept
and forward the preview/autofix flag (e.g., preview=preview or preview=not
autofix depending on variable names) so the pipeline honors preview mode, and
make the same change for the other similar call block around the _pipeline.apply
at the 393-404 region.
- Around line 318-320: The pipeline project root is computed into _project_root
and passed to RefactronPipeline(project_root=_project_root) but SessionStore and
BackupManager are still rooted off Path.cwd() or Path(target).parent in other
flows (autofix, rollback), causing resumed sessions/backups to be looked up in
the wrong workspace; update the code that constructs SessionStore and
BackupManager (and any uses in autofix and rollback handlers) to derive their
base directory from the pipeline's project_root (e.g., use
RefactronPipeline.project_root or the local _project_root variable) instead of
hard-coding Path.cwd() or Path(target).parent—also apply the same change to the
other occurrence noted (lines 529-547) so all session/backup operations
consistently use the pipeline project root.
- Around line 315-320: The session-aware branch currently constructs
RefactronPipeline without loading command config or applying CLI flags, so call
_load_config() before returning and map CLI options (--config, --profile,
--environment, --safety-level) into the session flow; ensure RefactronPipeline
is instantiated with the selected risk level (pass the mapped FixRiskLevel value
instead of relying on the default) and preserve any config-derived settings when
creating _pipeline using _project_root/_target_path so the session path respects
config and safety flags.

In `@refactron/core/pipeline.py`:
- Around line 140-178: The code creates a backup session but never actually
copies files before calling fix_engine.fix_file, so create per-file backups
immediately after obtaining backup_session_id (use
backup_manager.create_backup_session and backup_manager.backup_dir) and before
calling fix_engine.fix_file; then call
BackupManager.validate_backup_integrity(session.backup_session_id) to get
(valid_paths, corrupt_paths) and if any corrupt_paths exist mark the
corresponding FixItem(s) as BLOCKED with block_reason and add them to
session.blocked_fixes, only proceeding to call fix_engine.fix_file for files
whose backups validated; ensure session.backup_session_id remains set and
item.backup_path points to backup_manager.backup_dir / backup_session_id /
file_path.name for applied items.
- Around line 97-123: queue_issues() is truncating and altering CodeIssue data
(storing only a reduced FixQueueItem and probing fixers with a dummy "x = 1\n"
in _find_fixer_name()), causing mismatches when _queue_items_to_issues()
reconstructs issues and apply() sees different payloads; fix by persisting the
full original issue payload (serialize the entire CodeIssue) into the queue item
instead of losing fields, change _find_fixer_name() to probe fixers using the
actual issue source/content (or skip probing and record candidate fixer names
without mutating payload), and ensure _queue_items_to_issues() deserializes and
restores the original IssueCategory and all fields so apply(), FixEngine.fixers,
and FixStatus logic operate on the exact analyzer-produced CodeIssue throughout.
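The backup-before-fix item above (copy each file into the backup session, verify the copy, and only then let the fix engine touch the original) can be sketched roughly as below. Everything here is an illustrative stand-in: `backup_then_fix`, the SHA-256 comparison, and the `(applied, blocked)` return shape are assumptions standing in for `BackupManager.create_backup_session` and `validate_backup_integrity`:

```python
import hashlib
import shutil
from pathlib import Path


def backup_then_fix(files, backup_dir: Path, fix_file):
    """Copy each file into backup_dir, verify the copy, then fix the original."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    applied, blocked = [], []
    for src in files:
        dest = backup_dir / src.name
        shutil.copy2(src, dest)
        # Integrity check: the backup must hash identically to the original.
        same = (hashlib.sha256(src.read_bytes()).digest()
                == hashlib.sha256(dest.read_bytes()).digest())
        if not same:
            blocked.append((src, "backup integrity check failed"))
            continue
        fix_file(src)  # only fix files whose backup validated
        applied.append((src, dest))
    return applied, blocked
```

The key ordering property is that the copy and the integrity check both complete before `fix_file` runs, so a rollback always has a verified pre-fix snapshot to restore from.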

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 26058cde-73be-4781-945b-4ccad693df62

📥 Commits

Reviewing files that changed from the base of the PR and between ab81338 and 8700632.

📒 Files selected for processing (3)
  • refactron/cli/analysis.py
  • refactron/cli/refactor.py
  • refactron/core/pipeline.py

Comment on lines +100 to +106
@click.option(
"--fix-on",
"fix_on",
type=click.Choice(["CRITICAL", "ERROR", "WARNING", "INFO"], case_sensitive=False),
default=None,
help="Auto-queue issues at this level and above for fixing after analysis.",
)

⚠️ Potential issue | 🟠 Major

--fix-on is exposed but never affects the queued session.

The flag is parsed and _FIX_LEVEL_MAP is built, but this path still calls queue_issues(_pipeline_session, _all_issues) with no min_level. The saved session therefore ignores the user's threshold even though the help text says otherwise.

Also applies to: 280-309

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/analysis.py` around lines 100-106: the --fix-on flag is
parsed but never used when saving the queued session; modify the call to
queue_issues so it respects the parsed fix_on threshold by passing the computed
minimum severity (from _FIX_LEVEL_MAP using the fix_on value, e.g.,
_FIX_LEVEL_MAP[fix_on.upper()] when fix_on is not None) as the min_level
argument to queue_issues(_pipeline_session, _all_issues, min_level=...), and
ensure the same change is applied to the other code path(s) mentioned (lines
~280-309) where queue_issues is invoked so the saved session actually filters
issues by the user-specified level.

Comment on lines +292 to +304
_pipeline_session = PipelineSession(
session_id=_session_id,
target=str(_target_path),
created_at=datetime.now(timezone.utc).isoformat(),
total_files=summary.get("total_files", 0),
total_issues=summary.get("total_issues", 0),
issues_by_level={
"CRITICAL": summary.get("critical", 0),
"ERROR": summary.get("errors", 0),
"WARNING": summary.get("warnings", 0),
"INFO": summary.get("info", 0),
},
)

⚠️ Potential issue | 🟡 Minor

Set the lifecycle state explicitly when creating the session by hand.

RefactronPipeline.analyze() marks new sessions as SessionState.ANALYZED, but this manual construction relies on the dataclass default instead. A freshly analyzed session can then look pending to status and other session readers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/analysis.py` around lines 292-304: when constructing the
PipelineSession instance by hand, explicitly set its lifecycle/status to
SessionState.ANALYZED (or the dataclass field name used for lifecycle, e.g.,
"status" or "state") so it matches RefactronPipeline.analyze() behavior; update
the PipelineSession(...) call in analysis.py to include the lifecycle field with
SessionState.ANALYZED and import or reference SessionState where used (ensure
you use the same enum member name as in RefactronPipeline.analyze to keep
readers and status checks consistent).
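A tiny illustration of why the explicit field matters, using a stand-in enum and dataclass rather than refactron's real `PipelineSession` (the actual field name and default may differ):

```python
from dataclasses import dataclass
from enum import Enum


class SessionState(Enum):
    CREATED = "created"
    ANALYZED = "analyzed"


@dataclass
class PipelineSession:
    session_id: str
    # Dataclass default: a hand-built session silently looks un-analyzed.
    state: SessionState = SessionState.CREATED


# Passing the state explicitly keeps the manual path consistent with
# what RefactronPipeline.analyze() would produce.
session = PipelineSession(session_id="abc123", state=SessionState.ANALYZED)
```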

Comment on lines +354 to +364
# Filter queue by --fix-on level: mark items below threshold as skipped
_LEVEL_RANK = {"INFO": 0, "WARNING": 1, "ERROR": 2, "CRITICAL": 3}
_threshold = _LEVEL_RANK.get(fix_on.upper(), 3)
from refactron.core.pipeline_session import FixStatus as _FixStatus

for _item in _pipeline_session.fix_queue:
if _item.status == _FixStatus.PENDING:
if _LEVEL_RANK.get(_item.level.upper(), 0) < _threshold:
_item.status = _FixStatus.SKIPPED

_pending_count = len([i for i in _pipeline_session.fix_queue if i.status == _FixStatus.PENDING])

⚠️ Potential issue | 🟠 Major

Don't persist --fix-on filtering into the stored queue.

This mutates queued PENDING items to SKIPPED before apply. After one CRITICAL-only run, a later WARNING run against the same session never sees those lower-severity items again.

🧰 Tools
🪛 GitHub Actions: Pre-commit

[error] black failed: files were modified by this hook (reformatted refactron/cli/refactor.py). Re-run pre-commit to apply changes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/refactor.py` around lines 354-364: the current loop mutates
stored session items (setting _item.status = _FixStatus.SKIPPED) which persists
the --fix-on filter; instead leave _pipeline_session.fix_queue unchanged and
apply the threshold only transiently. Replace the in-place mutation with logic
that computes a local view or predicates: use _LEVEL_RANK, _threshold and
_FixStatus to count pending items whose level meets the threshold (e.g., count
items where item.status == _FixStatus.PENDING and
_LEVEL_RANK.get(item.level.upper(),0) >= _threshold) and pass that
filtered/local view into the apply step or consumer; do not write back SKIPPED
into _pipeline_session.fix_queue. Ensure all references to _pending_count and
any downstream use operate on the transient/filtered view rather than mutated
stored items.
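The non-mutating alternative can be sketched like this (plain dicts stand in for `FixQueueItem`; only `_LEVEL_RANK` and the threshold logic come from the snippet above):

```python
_LEVEL_RANK = {"INFO": 0, "WARNING": 1, "ERROR": 2, "CRITICAL": 3}


def eligible_items(fix_queue, fix_on):
    """Return a transient view of pending items at or above the threshold.

    The stored queue is never mutated, so a later run with a lower
    --fix-on level still sees the lower-severity pending items.
    """
    threshold = _LEVEL_RANK.get(fix_on.upper(), 3)
    return [
        item for item in fix_queue
        if item["status"] == "PENDING"
        and _LEVEL_RANK.get(item["level"].upper(), 0) >= threshold
    ]
```

`_pending_count` and the apply step would then operate on `eligible_items(...)` instead of on a queue whose items were rewritten to SKIPPED.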

Comment on lines +529 to +559
_store = SessionStore(root_dir=Path.cwd())

# Use explicit ID, else fall back to active workspace session
_resolved_id = pipeline_session_id or _store.get_current_id()

if _resolved_id:
_pipeline_session = _store.load(_resolved_id)
if _pipeline_session is None:
console.print(f"[red]Session not found: {_resolved_id}[/red]")
raise SystemExit(1)
if not _pipeline_session.applied_fixes:
console.print("[yellow]No applied fixes in this session to roll back.[/yellow]")
return
if not _pipeline_session.backup_session_id:
console.print("[red]Session has no backup ID — cannot roll back.[/red]")
raise SystemExit(1)

_bm = BackupManager(root_dir=Path.cwd())
_restored_count, _failed = _bm.rollback_session(_pipeline_session.backup_session_id)

_pipeline_session.state = SessionState.ROLLED_BACK
_store.save(_pipeline_session)
_store.clear_current()

console.print(
f"[green]Rolled back {_restored_count} file(s) from session "
f"{_resolved_id}[/green]"
)
for _f in _failed:
console.print(f"[red] Failed to restore: {_f}[/red]")
return

⚠️ Potential issue | 🔴 Critical

Don't auto-execute pipeline rollback before honoring the explicit rollback mode.

Because _resolved_id is handled before --list, --clear, --use-git, or the legacy positional session id are checked, any active current pipeline session makes this branch run immediately. refactron rollback --list can restore files instead of listing them, and the confirmation prompt is skipped entirely.

🧰 Tools
🪛 GitHub Actions: Pre-commit

[error] black failed: files were modified by this hook (reformatted refactron/cli/refactor.py). Re-run pre-commit to apply changes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@refactron/cli/refactor.py` around lines 529-559: the rollback branch
currently triggers as soon as _resolved_id is set (via pipeline_session_id or
SessionStore.get_current_id()), which runs before handling explicit rollback
modes like --list, --clear, --use-git or legacy positional IDs; move or guard
this logic so it only executes when the user actually requested a rollback
action (e.g., when a specific rollback flag/mode is set), not merely when an
active session exists: modify the control flow around _resolved_id,
SessionStore, and the block that uses BackupManager.rollback_session so that
flag checks for list/clear/use-git/positional-id are evaluated first and only
when none apply and the explicit rollback mode is present do you load the
session, validate backup_session_id, prompt for confirmation, and call
BackupManager.rollback_session; ensure functions/classes referenced are
SessionStore, _resolved_id/pipeline_session_id, BackupManager.rollback_session,
and _pipeline_session are used in the guarded branch.
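The guarded dispatch order can be sketched as a pure decision function. Flag names follow the review comment; the returned strings are illustrative markers only, not real command behavior:

```python
def rollback_dispatch(list_flag, clear_flag, use_git, positional_id,
                      pipeline_session_id, current_id):
    """Decide the rollback action: explicit modes win over the active session."""
    if list_flag:
        return "list"              # never restores files
    if clear_flag:
        return "clear"
    if use_git:
        return "git"
    if positional_id:
        return f"legacy:{positional_id}"
    # Only now fall back to the active pipeline session, if any.
    resolved = pipeline_session_id or current_id
    if resolved:
        return f"pipeline:{resolved}"
    return "nothing"
```

Checking the explicit modes first means an active session can no longer hijack `refactron rollback --list`, and the confirmation prompt in the pipeline branch is always reached before any files are restored.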

